00:00:00.001 Started by upstream project "autotest-per-patch" build number 132818
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.082 The recommended git tool is: git
00:00:00.082 using credential 00000000-0000-0000-0000-000000000002
00:00:00.083 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.150 Fetching changes from the remote Git repository
00:00:00.152 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.217 Using shallow fetch with depth 1
00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.217 > git --version # timeout=10
00:00:00.279 > git --version # 'git version 2.39.2'
00:00:00.279 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.314 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.314 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.930 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.947 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.958 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.959 > git config core.sparsecheckout # timeout=10
00:00:07.969 > git read-tree -mu HEAD # timeout=10
00:00:07.985 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.007 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.007 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.099 [Pipeline] Start of Pipeline
00:00:08.110 [Pipeline] library
00:00:08.111 Loading library shm_lib@master
00:00:08.111 Library shm_lib@master is cached. Copying from home.
00:00:08.126 [Pipeline] node
00:00:08.137 Running on WFP37 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:08.138 [Pipeline] {
00:00:08.148 [Pipeline] catchError
00:00:08.149 [Pipeline] {
00:00:08.159 [Pipeline] wrap
00:00:08.165 [Pipeline] {
00:00:08.171 [Pipeline] stage
00:00:08.172 [Pipeline] { (Prologue)
00:00:08.386 [Pipeline] sh
00:00:08.669 + logger -p user.info -t JENKINS-CI
00:00:08.687 [Pipeline] echo
00:00:08.689 Node: WFP37
00:00:08.695 [Pipeline] sh
00:00:08.995 [Pipeline] setCustomBuildProperty
00:00:09.005 [Pipeline] echo
00:00:09.006 Cleanup processes
00:00:09.011 [Pipeline] sh
00:00:09.297 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:09.297 502720 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:09.309 [Pipeline] sh
00:00:09.594 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:09.594 ++ grep -v 'sudo pgrep'
00:00:09.594 ++ awk '{print $1}'
00:00:09.594 + sudo kill -9
00:00:09.594 + true
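
The "Cleanup processes" step above is a standard kill-stale-PIDs shell idiom: list anything still running out of the workspace, drop the pgrep command itself from the listing, and force-kill the remainder. A minimal sketch of the same pattern (workspace path as in this job; the pids variable name is illustrative, not from the job script):

    # pgrep -af lists matching PIDs together with their full command lines
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk \
            | grep -v 'sudo pgrep' | awk '{print $1}')
    # '|| true' keeps the step green when nothing matched, as happened here
    sudo kill -9 $pids || true
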
00:00:09.609 [Pipeline] cleanWs
00:00:09.618 [WS-CLEANUP] Deleting project workspace...
00:00:09.618 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.625 [WS-CLEANUP] done
00:00:09.630 [Pipeline] setCustomBuildProperty
00:00:09.646 [Pipeline] sh
00:00:09.929 + sudo git config --global --replace-all safe.directory '*'
00:00:10.034 [Pipeline] httpRequest
00:00:10.435 [Pipeline] echo
00:00:10.436 Sorcerer 10.211.164.112 is alive
00:00:10.444 [Pipeline] retry
00:00:10.445 [Pipeline] {
00:00:10.459 [Pipeline] httpRequest
00:00:10.463 HttpMethod: GET
00:00:10.463 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.464 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.481 Response Code: HTTP/1.1 200 OK
00:00:10.481 Success: Status code 200 is in the accepted range: 200,404
00:00:10.481 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.598 [Pipeline] }
00:00:14.615 [Pipeline] // retry
00:00:14.622 [Pipeline] sh
00:00:14.908 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.925 [Pipeline] httpRequest
00:00:15.327 [Pipeline] echo
00:00:15.328 Sorcerer 10.211.164.112 is alive
00:00:15.336 [Pipeline] retry
00:00:15.338 [Pipeline] {
00:00:15.351 [Pipeline] httpRequest
00:00:15.355 HttpMethod: GET
00:00:15.356 URL: http://10.211.164.112/packages/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:00:15.356 Sending request to url: http://10.211.164.112/packages/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:00:15.376 Response Code: HTTP/1.1 200 OK
00:00:15.377 Success: Status code 200 is in the accepted range: 200,404
00:00:15.377 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:02:46.908 [Pipeline] }
00:02:46.925 [Pipeline] // retry
00:02:46.931 [Pipeline] sh
00:02:47.214 + tar --no-same-owner -xf spdk_86d35c37afb5a441206b26f894d7511170c8c587.tar.gz
00:02:49.759 [Pipeline] sh
00:02:50.041 + git -C spdk log --oneline -n5
00:02:50.041 86d35c37a bdev: simplify bdev_reset_freeze_channel
00:02:50.041 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:02:50.041 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:02:50.041 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:02:50.041 0ea9ac02f accel/mlx5: Create pool of UMRs
00:02:50.051 [Pipeline] }
00:02:50.063 [Pipeline] // stage
00:02:50.072 [Pipeline] stage
00:02:50.075 [Pipeline] { (Prepare)
00:02:50.089 [Pipeline] writeFile
00:02:50.103 [Pipeline] sh
00:02:50.386 + logger -p user.info -t JENKINS-CI
00:02:50.399 [Pipeline] sh
00:02:50.682 + logger -p user.info -t JENKINS-CI
00:02:50.694 [Pipeline] sh
00:02:50.977 + cat autorun-spdk.conf
00:02:50.977 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:50.977 SPDK_TEST_NVMF=1
00:02:50.977 SPDK_TEST_NVME_CLI=1
00:02:50.977 SPDK_TEST_NVMF_NICS=mlx5
00:02:50.977 SPDK_RUN_UBSAN=1
00:02:50.977 NET_TYPE=phy
00:02:50.983 RUN_NIGHTLY=0
00:02:50.986 [Pipeline] readFile
00:02:51.002 [Pipeline] withEnv
00:02:51.004 [Pipeline] {
00:02:51.012 [Pipeline] sh
00:02:51.291 + set -ex
00:02:51.291 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:02:51.291 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:02:51.291 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:51.291 ++ SPDK_TEST_NVMF=1
00:02:51.291 ++ SPDK_TEST_NVME_CLI=1
00:02:51.291 ++ SPDK_TEST_NVMF_NICS=mlx5
00:02:51.291 ++ SPDK_RUN_UBSAN=1
00:02:51.291 ++ NET_TYPE=phy
00:02:51.291 ++ RUN_NIGHTLY=0
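
autorun-spdk.conf is a flat key=value file: the runner first prints it with cat, then sources it under set -ex, which is why every assignment is echoed again above with a ++ xtrace prefix. A minimal sketch of that pattern (path as in this job):

    set -ex    # trace every command and abort on the first failure
    conf=/var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
    [[ -f $conf ]] && source $conf   # sets SPDK_TEST_NVMF=1, NET_TYPE=phy, ...
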
00:02:51.291 + case $SPDK_TEST_NVMF_NICS in
00:02:51.291 + DRIVERS=mlx5_ib
00:02:51.291 + [[ -n mlx5_ib ]]
00:02:51.291 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:57.862 rmmod: ERROR: Module irdma is not currently loaded
00:02:57.862 rmmod: ERROR: Module i40iw is not currently loaded
00:02:57.862 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:57.862 + true
00:02:57.862 + for D in $DRIVERS
00:02:57.862 + sudo modprobe mlx5_ib
00:02:57.862 + exit 0
00:02:57.871 [Pipeline] }
00:02:57.886 [Pipeline] // withEnv
00:02:57.891 [Pipeline] }
00:02:57.905 [Pipeline] // stage
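
The NIC preparation above is a reset-then-load sequence: unload every RDMA driver that might claim the interfaces, tolerate "not currently loaded" errors, then probe only the driver selected by SPDK_TEST_NVMF_NICS. The same logic as a standalone sketch:

    DRIVERS=mlx5_ib    # chosen by the mlx5 branch of the case statement
    # start from a clean slate; most of these modules are expected to be absent
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"    # load only what this job needs
    done
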
00:02:57.915 [Pipeline] catchError
00:02:57.917 [Pipeline] {
00:02:57.931 [Pipeline] timeout
00:02:57.931 Timeout set to expire in 1 hr 0 min
00:02:57.933 [Pipeline] {
00:02:57.947 [Pipeline] stage
00:02:57.949 [Pipeline] { (Tests)
00:02:57.963 [Pipeline] sh
00:02:58.248 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:02:58.248 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:02:58.248 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:02:58.248 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:02:58.248 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:58.248 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:02:58.248 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:02:58.248 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:58.248 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:02:58.248 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:58.248 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:02:58.248 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:02:58.248 + source /etc/os-release
00:02:58.248 ++ NAME='Fedora Linux'
00:02:58.248 ++ VERSION='39 (Cloud Edition)'
00:02:58.248 ++ ID=fedora
00:02:58.248 ++ VERSION_ID=39
00:02:58.248 ++ VERSION_CODENAME=
00:02:58.248 ++ PLATFORM_ID=platform:f39
00:02:58.248 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:58.248 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:58.248 ++ LOGO=fedora-logo-icon
00:02:58.248 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:58.248 ++ HOME_URL=https://fedoraproject.org/
00:02:58.248 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:58.248 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:58.248 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:58.248 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:58.248 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:58.248 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:58.248 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:58.248 ++ SUPPORT_END=2024-11-12
00:02:58.248 ++ VARIANT='Cloud Edition'
00:02:58.248 ++ VARIANT_ID=cloud
00:02:58.248 + uname -a
00:02:58.248 Linux spdk-wfp-37 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:58.248 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:03:00.784 Hugepages
00:03:00.784 node hugesize free / total
00:03:00.784 node0 1048576kB 0 / 0
00:03:00.784 node0 2048kB 0 / 0
00:03:00.784 node1 1048576kB 0 / 0
00:03:00.784 node1 2048kB 0 / 0
00:03:00.784
00:03:00.784 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:00.784 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:00.784 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:00.784 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:00.784 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:00.784 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:00.784 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:00.784 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:00.784 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:00.784 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:00.784 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:00.784 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:00.784 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:00.784 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:00.784 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:00.784 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:00.784 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:00.784 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:00.784 + rm -f /tmp/spdk-ld-path
00:03:00.784 + source autorun-spdk.conf
00:03:00.784 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:00.784 ++ SPDK_TEST_NVMF=1
00:03:00.784 ++ SPDK_TEST_NVME_CLI=1
00:03:00.784 ++ SPDK_TEST_NVMF_NICS=mlx5
00:03:00.784 ++ SPDK_RUN_UBSAN=1
00:03:00.784 ++ NET_TYPE=phy
00:03:00.784 ++ RUN_NIGHTLY=0
00:03:00.784 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:00.784 + [[ -n '' ]]
00:03:00.784 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:00.784 + for M in /var/spdk/build-*-manifest.txt
00:03:00.784 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:00.784 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:03:00.784 + for M in /var/spdk/build-*-manifest.txt
00:03:00.784 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:00.784 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:03:01.044 + for M in /var/spdk/build-*-manifest.txt
00:03:01.044 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:01.044 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:03:01.044 ++ uname
00:03:01.044 + [[ Linux == \L\i\n\u\x ]]
00:03:01.044 + sudo dmesg -T
00:03:01.044 + sudo dmesg --clear
00:03:01.044 + dmesg_pid=504175
00:03:01.044 + [[ Fedora Linux == FreeBSD ]]
00:03:01.044 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:01.044 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:01.044 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:01.044 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:01.044 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:01.044 + [[ -x /usr/src/fio-static/fio ]]
00:03:01.044 + export FIO_BIN=/usr/src/fio-static/fio
00:03:01.044 + FIO_BIN=/usr/src/fio-static/fio
00:03:01.044 + sudo dmesg -Tw
00:03:01.044 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:01.044 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:01.044 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:01.044 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:01.044 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:01.044 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:01.044 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:01.044 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:01.044 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:01.044 03:50:55 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:01.044 03:50:55 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:01.044 03:50:55 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:01.044 03:50:55 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:01.044 03:50:55 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:01.044 03:50:55 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5
00:03:01.044 03:50:55 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1
00:03:01.044 03:50:55 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy
00:03:01.044 03:50:55 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0
00:03:01.044 03:50:55 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:01.044 03:50:55 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:03:01.044 03:50:55 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:01.044 03:50:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:03:01.044 03:50:55 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:01.044 03:50:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:01.044 03:50:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:01.044 03:50:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:01.044 03:50:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.044 03:50:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.044 03:50:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.044 03:50:55 -- paths/export.sh@5 -- $ export PATH
00:03:01.044 03:50:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.044 03:50:55 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:03:01.044 03:50:55 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:01.044 03:50:55 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733799055.XXXXXX
00:03:01.044 03:50:55 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733799055.H7D7oB
00:03:01.044 03:50:55 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:01.044 03:50:55 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:01.044 03:50:55 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:03:01.044 03:50:55 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:01.044 03:50:55 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:01.044 03:50:55 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:01.044 03:50:55 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:01.044 03:50:55 -- common/autotest_common.sh@10 -- $ set +x
00:03:01.044 03:50:55 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:03:01.044 03:50:55 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:01.044 03:50:55 -- pm/common@17 -- $ local monitor
00:03:01.044 03:50:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.044 03:50:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.044 03:50:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.044 03:50:55 -- pm/common@21 -- $ date +%s
00:03:01.044 03:50:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.044 03:50:55 -- pm/common@21 -- $ date +%s
00:03:01.044 03:50:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733799055
00:03:01.044 03:50:55 -- pm/common@25 -- $ sleep 1
00:03:01.044 03:50:55 -- pm/common@21 -- $ date +%s
00:03:01.044 03:50:55 -- pm/common@21 -- $ date +%s
00:03:01.044 03:50:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733799055
00:03:01.044 03:50:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733799055
00:03:01.044 03:50:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733799055
00:03:01.303 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733799055_collect-cpu-load.pm.log
00:03:01.303 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733799055_collect-vmstat.pm.log
00:03:01.303 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733799055_collect-cpu-temp.pm.log
00:03:01.303 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733799055_collect-bmc-pm.bmc.pm.log
00:03:02.241 03:50:56 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:02.241 03:50:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:02.241 03:50:56 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:02.241 03:50:56 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:02.241 03:50:56 -- spdk/autobuild.sh@16 -- $ date -u
00:03:02.241 Tue Dec 10 02:50:56 AM UTC 2024
00:03:02.241 03:50:56 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:02.241 v25.01-pre-312-g86d35c37a
00:03:02.241 03:50:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:02.241 03:50:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:02.241 03:50:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:02.241 03:50:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:02.241 03:50:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:02.241 03:50:56 -- common/autotest_common.sh@10 -- $ set +x
00:03:02.241 ************************************
00:03:02.241 START TEST ubsan
00:03:02.241 ************************************
00:03:02.241 03:50:56 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:02.241 using ubsan
00:03:02.241
00:03:02.241 real 0m0.000s
00:03:02.241 user 0m0.000s
00:03:02.241 sys 0m0.000s
00:03:02.241 03:50:56 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:02.241 03:50:56 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:02.241 ************************************
00:03:02.241 END TEST ubsan
00:03:02.241 ************************************
00:03:02.241 03:50:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:02.241 03:50:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:02.241 03:50:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:02.241 03:50:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:02.241 03:50:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:02.241 03:50:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:02.241 03:50:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:02.241 03:50:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:02.241 03:50:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
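
The flag list handed to configure is the config_params string assembled earlier by get_config_params, with --with-shared appended by autobuild.sh. The same invocation wrapped for readability (a sketch of the command logged above, not a change to it):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
                --with-fio=/usr/src/fio --with-iscsi-initiator \
                --disable-unit-tests --enable-ubsan --enable-coverage \
                --with-ublk --with-shared
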
00:03:02.241 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:03:02.241 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:03:02.809 Using 'verbs' RDMA provider
00:03:15.583 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:25.559 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:25.817 Creating mk/config.mk...done.
00:03:25.817 Creating mk/cc.flags.mk...done.
00:03:25.817 Type 'make' to build.
00:03:25.817 03:51:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
00:03:25.817 03:51:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:25.817 03:51:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:25.817 03:51:20 -- common/autotest_common.sh@10 -- $ set +x
00:03:25.817 ************************************
00:03:25.817 START TEST make
00:03:25.817 ************************************
00:03:25.817 03:51:20 make -- common/autotest_common.sh@1129 -- $ make -j112
00:03:26.386 make[1]: Nothing to be done for 'all'.
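
run_test is the harness wrapper that produces the START TEST/END TEST banners and the real/user/sys timing seen around the ubsan and make steps. Conceptually it behaves like this simplified sketch (the real implementation lives in autotest_common.sh and does more bookkeeping):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"              # run the wrapped command, e.g. 'make -j112'
        echo "END TEST $name"
    }
    run_test make make -j112
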
00:03:34.587 The Meson build system
00:03:34.587 Version: 1.5.0
00:03:34.588 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:03:34.588 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:03:34.588 Build type: native build
00:03:34.588 Program cat found: YES (/usr/bin/cat)
00:03:34.588 Project name: DPDK
00:03:34.588 Project version: 24.03.0
00:03:34.588 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:34.588 C linker for the host machine: cc ld.bfd 2.40-14
00:03:34.588 Host machine cpu family: x86_64
00:03:34.588 Host machine cpu: x86_64
00:03:34.588 Message: ## Building in Developer Mode ##
00:03:34.588 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:34.588 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:34.588 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:34.588 Program python3 found: YES (/usr/bin/python3)
00:03:34.588 Program cat found: YES (/usr/bin/cat)
00:03:34.588 Compiler for C supports arguments -march=native: YES
00:03:34.588 Checking for size of "void *" : 8
00:03:34.588 Checking for size of "void *" : 8 (cached)
00:03:34.588 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:34.588 Library m found: YES
00:03:34.588 Library numa found: YES
00:03:34.588 Has header "numaif.h" : YES
00:03:34.588 Library fdt found: NO
00:03:34.588 Library execinfo found: NO
00:03:34.588 Has header "execinfo.h" : YES
00:03:34.588 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:34.588 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:34.588 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:34.588 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:34.588 Run-time dependency openssl found: YES 3.1.1
00:03:34.588 Run-time dependency libpcap found: YES 1.10.4
00:03:34.588 Has header "pcap.h" with dependency libpcap: YES
00:03:34.588 Compiler for C supports arguments -Wcast-qual: YES
00:03:34.588 Compiler for C supports arguments -Wdeprecated: YES
00:03:34.588 Compiler for C supports arguments -Wformat: YES
00:03:34.588 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:34.588 Compiler for C supports arguments -Wformat-security: NO
00:03:34.588 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:34.588 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:34.588 Compiler for C supports arguments -Wnested-externs: YES
00:03:34.588 Compiler for C supports arguments -Wold-style-definition: YES
00:03:34.588 Compiler for C supports arguments -Wpointer-arith: YES
00:03:34.588 Compiler for C supports arguments -Wsign-compare: YES
00:03:34.588 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:34.588 Compiler for C supports arguments -Wundef: YES
00:03:34.588 Compiler for C supports arguments -Wwrite-strings: YES
00:03:34.588 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:34.588 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:34.588 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:34.588 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:34.588 Program objdump found: YES (/usr/bin/objdump)
00:03:34.588 Compiler for C supports arguments -mavx512f: YES
00:03:34.588 Checking if "AVX512 checking" compiles: YES
00:03:34.588 Fetching value of define "__SSE4_2__" : 1
00:03:34.588 Fetching value of define "__AES__" : 1
00:03:34.588 Fetching value of define "__AVX__" : 1
00:03:34.588 Fetching value of define "__AVX2__" : 1
00:03:34.588 Fetching value of define "__AVX512BW__" : 1
00:03:34.588 Fetching value of define "__AVX512CD__" : 1
00:03:34.588 Fetching value of define "__AVX512DQ__" : 1
00:03:34.588 Fetching value of define "__AVX512F__" : 1
00:03:34.588 Fetching value of define "__AVX512VL__" : 1
00:03:34.588 Fetching value of define "__PCLMUL__" : 1
00:03:34.588 Fetching value of define "__RDRND__" : 1
00:03:34.588 Fetching value of define "__RDSEED__" : 1
00:03:34.588 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:34.588 Fetching value of define "__znver1__" : (undefined)
00:03:34.588 Fetching value of define "__znver2__" : (undefined)
00:03:34.588 Fetching value of define "__znver3__" : (undefined)
00:03:34.588 Fetching value of define "__znver4__" : (undefined)
00:03:34.588 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:34.588 Message: lib/log: Defining dependency "log"
00:03:34.588 Message: lib/kvargs: Defining dependency "kvargs"
00:03:34.588 Message: lib/telemetry: Defining dependency "telemetry"
00:03:34.588 Checking for function "getentropy" : NO
00:03:34.588 Message: lib/eal: Defining dependency "eal"
00:03:34.588 Message: lib/ring: Defining dependency "ring"
00:03:34.588 Message: lib/rcu: Defining dependency "rcu"
00:03:34.588 Message: lib/mempool: Defining dependency "mempool"
00:03:34.588 Message: lib/mbuf: Defining dependency "mbuf"
00:03:34.588 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:34.588 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:34.588 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:34.588 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:34.588 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:34.588 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:34.588 Compiler for C supports arguments -mpclmul: YES
00:03:34.588 Compiler for C supports arguments -maes: YES
00:03:34.588 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:34.588 Compiler for C supports arguments -mavx512bw: YES
00:03:34.588 Compiler for C supports arguments -mavx512dq: YES
00:03:34.588 Compiler for C supports arguments -mavx512vl: YES
00:03:34.588 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:34.588 Compiler for C supports arguments -mavx2: YES
00:03:34.588 Compiler for C supports arguments -mavx: YES
00:03:34.588 Message: lib/net: Defining dependency "net"
00:03:34.588 Message: lib/meter: Defining dependency "meter"
00:03:34.588 Message: lib/ethdev: Defining dependency "ethdev"
00:03:34.588 Message: lib/pci: Defining dependency "pci"
00:03:34.588 Message: lib/cmdline: Defining dependency "cmdline"
00:03:34.588 Message: lib/hash: Defining dependency "hash"
00:03:34.588 Message: lib/timer: Defining dependency "timer"
00:03:34.588 Message: lib/compressdev: Defining dependency "compressdev"
00:03:34.588 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:34.588 Message: lib/dmadev: Defining dependency "dmadev"
00:03:34.588 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:34.588 Message: lib/power: Defining dependency "power"
00:03:34.588 Message: lib/reorder: Defining dependency "reorder"
00:03:34.588 Message: lib/security: Defining dependency "security"
00:03:34.588 Has header "linux/userfaultfd.h" : YES
00:03:34.588 Has header "linux/vduse.h" : YES
00:03:34.588 Message: lib/vhost: Defining dependency "vhost"
00:03:34.588 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:34.588 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:34.588 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:34.588 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:34.588 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:34.588 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:34.588 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:34.588 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:34.588 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:34.588 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:34.588 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:34.588 Configuring doxy-api-html.conf using configuration
00:03:34.588 Configuring doxy-api-man.conf using configuration
00:03:34.588 Program mandb found: YES (/usr/bin/mandb)
00:03:34.588 Program sphinx-build found: NO
00:03:34.588 Configuring rte_build_config.h using configuration
00:03:34.588 Message:
00:03:34.588 =================
00:03:34.588 Applications Enabled
00:03:34.588 =================
00:03:34.588
00:03:34.588 apps:
00:03:34.588
00:03:34.588
00:03:34.588 Message:
00:03:34.588 =================
00:03:34.588 Libraries Enabled
00:03:34.588 =================
00:03:34.588
00:03:34.588 libs:
00:03:34.588 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:34.588 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:34.588 cryptodev, dmadev, power, reorder, security, vhost,
00:03:34.588
00:03:34.588 Message:
00:03:34.588 ===============
00:03:34.588 Drivers Enabled
00:03:34.588 ===============
00:03:34.588
00:03:34.588 common:
00:03:34.588
00:03:34.588 bus:
00:03:34.588 pci, vdev,
00:03:34.588 mempool:
00:03:34.588 ring,
00:03:34.588 dma:
00:03:34.588
00:03:34.588 net:
00:03:34.588
00:03:34.588 crypto:
00:03:34.588
00:03:34.588 compress:
00:03:34.588
00:03:34.588 vdpa:
00:03:34.588
00:03:34.588
00:03:34.588 Message:
00:03:34.588 =================
00:03:34.589 Content Skipped
00:03:34.589 =================
00:03:34.589
00:03:34.589 apps:
00:03:34.589 dumpcap: explicitly disabled via build config
00:03:34.589 graph: explicitly disabled via build config
00:03:34.589 pdump: explicitly disabled via build config
00:03:34.589 proc-info: explicitly disabled via build config
00:03:34.589 test-acl: explicitly disabled via build config
00:03:34.589 test-bbdev: explicitly disabled via build config
00:03:34.589 test-cmdline: explicitly disabled via build config
00:03:34.589 test-compress-perf: explicitly disabled via build config
00:03:34.589 test-crypto-perf: explicitly disabled via build config
00:03:34.589 test-dma-perf: explicitly disabled via build config
00:03:34.589 test-eventdev: explicitly disabled via build config
00:03:34.589 test-fib: explicitly disabled via build config
00:03:34.589 test-flow-perf: explicitly disabled via build config
00:03:34.589 test-gpudev: explicitly disabled via build config
00:03:34.589 test-mldev: explicitly disabled via build config
00:03:34.589 test-pipeline: explicitly disabled via build config
00:03:34.589 test-pmd: explicitly disabled via build config
00:03:34.589 test-regex: explicitly disabled via build config
00:03:34.589 test-sad: explicitly disabled via build config
00:03:34.589 test-security-perf: explicitly disabled via build config
00:03:34.589
00:03:34.589 libs:
00:03:34.589 argparse: explicitly disabled via build config
00:03:34.589 metrics: explicitly disabled via build config
00:03:34.589 acl: explicitly disabled via build config
00:03:34.589 bbdev: explicitly disabled via build config
00:03:34.589 bitratestats: explicitly disabled via build config
00:03:34.589 bpf: explicitly disabled via build config
00:03:34.589 cfgfile: explicitly disabled via build config
00:03:34.589 distributor: explicitly disabled via build config
00:03:34.589 efd: explicitly disabled via build config
00:03:34.589 eventdev: explicitly disabled via build config
00:03:34.589 dispatcher: explicitly disabled via build config
00:03:34.589 gpudev: explicitly disabled via build config
00:03:34.589 gro: explicitly disabled via build config
00:03:34.589 gso: explicitly disabled via build config
00:03:34.589 ip_frag: explicitly disabled via build config
00:03:34.589 jobstats: explicitly disabled via build config
00:03:34.589 latencystats: explicitly disabled via build config
00:03:34.589 lpm: explicitly disabled via build config
00:03:34.589 member: explicitly disabled via build config
00:03:34.589 pcapng: explicitly disabled via build config
00:03:34.589 rawdev: explicitly disabled via build config
00:03:34.589 regexdev: explicitly disabled via build config
00:03:34.589 mldev: explicitly disabled via build config
00:03:34.589 rib: explicitly disabled via build config
00:03:34.589 sched: explicitly disabled via build config
00:03:34.589 stack: explicitly disabled via build config
00:03:34.589 ipsec: explicitly disabled via build config
00:03:34.589 pdcp: explicitly disabled via build config
00:03:34.589 fib: explicitly disabled via build config
00:03:34.589 port: explicitly disabled via build config
00:03:34.589 pdump: explicitly disabled via build config
00:03:34.589 table: explicitly disabled via build config
00:03:34.589 pipeline: explicitly disabled via build config
00:03:34.589 graph: explicitly disabled via build config
00:03:34.589 node: explicitly disabled via build config
00:03:34.589
00:03:34.589 drivers:
00:03:34.589 common/cpt: not in enabled drivers build config
00:03:34.589 common/dpaax: not in enabled drivers build config
00:03:34.589 common/iavf: not in enabled drivers build config
00:03:34.589 common/idpf: not in enabled drivers build config
00:03:34.589 common/ionic: not in enabled drivers build config
00:03:34.589 common/mvep: not in enabled drivers build config
00:03:34.589 common/octeontx: not in enabled drivers build config
00:03:34.589 bus/auxiliary: not in enabled drivers build config
00:03:34.589 bus/cdx: not in enabled drivers build config
00:03:34.589 bus/dpaa: not in enabled drivers build config
00:03:34.589 bus/fslmc: not in enabled drivers build config
00:03:34.589 bus/ifpga: not in enabled drivers build config
00:03:34.589 bus/platform: not in enabled drivers build config
00:03:34.589 bus/uacce: not in enabled drivers build config
00:03:34.589 bus/vmbus: not in enabled drivers build config
00:03:34.589 common/cnxk: not in enabled drivers build config
00:03:34.589 common/mlx5: not in enabled drivers build config
00:03:34.589 common/nfp: not in enabled drivers build config
00:03:34.589 common/nitrox: not in enabled drivers build config
00:03:34.589 common/qat: not in enabled drivers build config
00:03:34.589 common/sfc_efx: not in enabled drivers build config
00:03:34.589 mempool/bucket: not in enabled drivers build config
00:03:34.589 mempool/cnxk: not in enabled drivers build config
00:03:34.589 mempool/dpaa: not in enabled drivers build config
00:03:34.589 mempool/dpaa2: not in enabled drivers build config
00:03:34.589 mempool/octeontx: not in enabled drivers build config
00:03:34.589 mempool/stack: not in enabled drivers build config
00:03:34.589 dma/cnxk: not in enabled drivers build config
00:03:34.589 dma/dpaa: not in enabled drivers build config
00:03:34.589 dma/dpaa2: not in enabled drivers build config
00:03:34.589 dma/hisilicon: not in enabled drivers build config
00:03:34.589 dma/idxd: not in enabled drivers build config
00:03:34.589 dma/ioat: not in enabled drivers build config
00:03:34.589 dma/skeleton: not in enabled drivers build config
00:03:34.589 net/af_packet: not in enabled drivers build config
00:03:34.589 net/af_xdp: not in enabled drivers build config
00:03:34.589 net/ark: not in enabled drivers build config
00:03:34.589 net/atlantic: not in enabled drivers build config
00:03:34.589 net/avp: not in enabled drivers build config
00:03:34.589 net/axgbe: not in enabled drivers build config
00:03:34.589 net/bnx2x: not in enabled drivers build config
00:03:34.589 net/bnxt: not in enabled drivers build config
00:03:34.589 net/bonding: not in enabled drivers build config
00:03:34.589 net/cnxk: not in enabled drivers build config
00:03:34.589 net/cpfl: not in enabled drivers build config
00:03:34.589 net/cxgbe: not in enabled drivers build config
00:03:34.589 net/dpaa: not in enabled drivers build config
00:03:34.589 net/dpaa2: not in enabled drivers build config
00:03:34.589 net/e1000: not in enabled drivers build config
00:03:34.589 net/ena: not in enabled drivers build config
00:03:34.589 net/enetc: not in enabled drivers build config
00:03:34.589 net/enetfec: not in enabled drivers build config
00:03:34.589 net/enic: not in enabled drivers build config
00:03:34.589 net/failsafe: not in enabled drivers build config
00:03:34.589 net/fm10k: not in enabled drivers build config
00:03:34.589 net/gve: not in enabled drivers build config
00:03:34.589 net/hinic: not in enabled drivers build config
00:03:34.589 net/hns3: not in enabled drivers build config
00:03:34.589 net/i40e: not in enabled drivers build config
00:03:34.589 net/iavf: not in enabled drivers build config
00:03:34.589 net/ice: not in enabled drivers build config
00:03:34.589 net/idpf: not in enabled drivers build config
00:03:34.589 net/igc: not in enabled drivers build config
00:03:34.589 net/ionic: not in enabled drivers build config
00:03:34.589 net/ipn3ke: not in enabled drivers build config
00:03:34.589 net/ixgbe: not in enabled drivers build config
00:03:34.589 net/mana: not in enabled drivers build config
00:03:34.589 net/memif: not in enabled drivers build config
00:03:34.589 net/mlx4: not in enabled drivers build config
00:03:34.589 net/mlx5: not in enabled drivers build config
00:03:34.589 net/mvneta: not in enabled drivers build config
00:03:34.589 net/mvpp2: not in enabled drivers build config
00:03:34.589 net/netvsc: not in enabled drivers build config
00:03:34.589 net/nfb: not in enabled drivers build config
00:03:34.589 net/nfp: not in enabled drivers build config
00:03:34.589 net/ngbe: not in enabled drivers build config
00:03:34.589 net/null: not in enabled drivers build config
00:03:34.589 net/octeontx: not in enabled drivers build config
00:03:34.589 net/octeon_ep: not in enabled drivers build config
00:03:34.589 net/pcap: not in enabled drivers build config
00:03:34.589 net/pfe: not in enabled drivers build config
00:03:34.589 net/qede: not in enabled drivers build config
00:03:34.589 net/ring: not in enabled drivers build config
00:03:34.589 net/sfc: not in enabled drivers build config
00:03:34.589 net/softnic: not in enabled drivers build config
00:03:34.589 net/tap: not in enabled drivers build config
00:03:34.589 net/thunderx: not in enabled drivers build config
00:03:34.589 net/txgbe: not in enabled drivers build config
00:03:34.589 net/vdev_netvsc: not in enabled drivers build config
00:03:34.589 net/vhost: not in enabled drivers build config
00:03:34.589 net/virtio: not in enabled drivers build config
00:03:34.589 net/vmxnet3: not in enabled drivers build config
00:03:34.589 raw/*: missing internal dependency, "rawdev"
00:03:34.589 crypto/armv8: not in enabled drivers build config
00:03:34.589 crypto/bcmfs: not in enabled drivers build config
00:03:34.589 crypto/caam_jr: not in enabled drivers build config
00:03:34.589 crypto/ccp: not in enabled drivers build config
00:03:34.589 crypto/cnxk: not in enabled drivers build config
00:03:34.589 crypto/dpaa_sec: not in enabled drivers build config
00:03:34.589 crypto/dpaa2_sec: not in enabled drivers build config
00:03:34.589 crypto/ipsec_mb: not in enabled drivers build config
00:03:34.589 crypto/mlx5: not in enabled drivers build config
00:03:34.589 crypto/mvsam: not in enabled drivers build config
00:03:34.589 crypto/nitrox: not in enabled drivers build config
00:03:34.589 crypto/null: not in enabled drivers build config
00:03:34.589 crypto/octeontx: not in enabled drivers build config
00:03:34.589 crypto/openssl: not in enabled drivers build config
00:03:34.589 crypto/scheduler: not in enabled drivers build config
00:03:34.589 crypto/uadk: not in enabled drivers build config
00:03:34.589 crypto/virtio: not in enabled drivers build config
00:03:34.589 compress/isal: not in enabled drivers build config
00:03:34.589 compress/mlx5: not in enabled drivers build config
00:03:34.589 compress/nitrox: not in enabled drivers build config
00:03:34.589 compress/octeontx: not in enabled drivers build config
00:03:34.589 compress/zlib: not in enabled drivers build config
00:03:34.589 regex/*: missing internal dependency, "regexdev"
00:03:34.589 ml/*: missing internal dependency, "mldev"
00:03:34.589 vdpa/ifc: not in enabled drivers build config
00:03:34.589 vdpa/mlx5: not in enabled drivers build config
00:03:34.589 vdpa/nfp: not in enabled drivers build config
00:03:34.589 vdpa/sfc: not in enabled drivers build config
00:03:34.589 event/*: missing internal dependency, "eventdev"
00:03:34.589 baseband/*: missing internal dependency, "bbdev"
00:03:34.589 gpu/*: missing internal dependency, "gpudev"
00:03:34.589
00:03:34.589
00:03:34.589 Build targets in project: 85
00:03:34.589
00:03:34.589 DPDK 24.03.0
00:03:34.589
00:03:34.589 User defined options
00:03:34.589 buildtype : debug
00:03:34.589 default_library : shared
00:03:34.589 libdir : lib
00:03:34.589 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:03:34.589 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:34.589 c_link_args :
00:03:34.589 cpu_instruction_set: native
00:03:34.589 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:03:34.589 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:03:34.589 enable_docs : false
00:03:34.589 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:34.589 enable_kmods : false
00:03:34.589 max_lcores : 128
00:03:34.589 tests : false
00:03:34.589
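
The "User defined options" block records how SPDK pins down its bundled DPDK: a debug, shared-library build with nearly all apps and libraries disabled and only the bus/mempool/power drivers enabled. An equivalent standalone invocation would look roughly like this (a sketch assuming DPDK 24.03's disable_apps/disable_libs meson options; the comma lists are shortened here, full values as printed above):

    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        -Dc_args='-Wno-stringop-overflow -fcommon -fPIC -Werror' \
        -Ddisable_apps='test-fib,test-sad,test,...' \
        -Ddisable_libs='bbdev,argparse,latencystats,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...' \
        -Dmax_lcores=128 -Dtests=false
    ninja -C build-tmp    # drive the generated build, as the log does next
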
00:03:34.589 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:34.589 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:03:34.589 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:34.590 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:34.590 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:34.590 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:34.590 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:34.590 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:34.590 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:34.590 [8/268] Linking static target lib/librte_kvargs.a
00:03:34.590 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:34.590 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:34.590 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:34.590 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:34.590 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:34.590 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:34.590 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:34.590 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:34.590 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:34.590 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:34.590 [19/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:34.590 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:34.590 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:34.590 [22/268] Linking static target lib/librte_log.a
00:03:34.590 [23/268] Linking static target lib/librte_pci.a
00:03:34.590 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:34.590 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:34.590 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:34.590 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:34.590 [28/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:34.590 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:34.590 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:34.590 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:34.590 [32/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:34.590 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:34.590 [34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:34.590 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:34.590 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:34.590 [37/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:34.590 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:34.590 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:34.590 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:34.590 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:34.590 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:34.590 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:34.590 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:34.590 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:34.590 [46/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:34.590 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:34.590 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:34.590 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:34.590 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:34.590 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:34.590 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:34.590 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:34.590 [54/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:34.590 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:34.590 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:34.590 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:34.590 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:34.590 [59/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:34.590 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:34.590 [61/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:34.590 [62/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:34.590 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:34.590 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:34.590 [65/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:34.590 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:34.590 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:34.590 [68/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:34.850 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:34.850 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:34.850 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:34.850 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:34.850 [73/268] Linking static target lib/librte_ring.a
00:03:34.850 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:34.850 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:34.850 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:34.850 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:34.850 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:34.850 [79/268] Linking static target lib/librte_telemetry.a
00:03:34.850 [80/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:34.850 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:34.850 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:34.850 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:34.850 [84/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:34.850 [85/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.850 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:34.850 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:34.850 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:34.850 [89/268] Linking static target lib/librte_meter.a
00:03:34.850 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:34.850 [91/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:34.850 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:34.850 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:34.850 [94/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:34.850 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:34.850 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:34.850 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:34.850 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:34.850 [99/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:34.850 [100/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:34.850 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:34.850 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:34.850 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:34.850 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:34.850 [105/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:34.850 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.850 [107/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:34.850 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:34.850 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:34.850 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:34.850 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:34.850 [112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:34.850 [113/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:34.850 [114/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:34.850 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:34.850 [116/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:34.850 [117/268] Linking static target lib/librte_cmdline.a
00:03:34.850 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:34.850 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:34.850 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:34.850 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:34.850 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:34.850 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:34.850 [124/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:34.850 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:34.850 [126/268] Linking static target lib/librte_mempool.a
00:03:34.850 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:34.850 [128/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:34.850 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:34.850 [130/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:34.850 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:34.850 [132/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:34.850 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:34.850 [134/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:34.850 [135/268] Linking static target lib/librte_dmadev.a
00:03:34.850 [136/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:34.850 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:34.850 [138/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:34.850 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:34.850 [140/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:34.850 [142/268] Linking static target lib/librte_net.a 00:03:34.850 [143/268] Linking static target lib/librte_rcu.a 00:03:34.850 [144/268] Linking static target lib/librte_timer.a 00:03:34.850 [145/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:34.850 [146/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:34.850 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:34.850 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:34.850 [149/268] Linking static target lib/librte_compressdev.a 00:03:34.850 [150/268] Linking static target lib/librte_eal.a 00:03:34.850 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:35.109 [152/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.109 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:35.109 [154/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:35.109 [155/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:35.109 [156/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:35.109 [157/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.109 [158/268] Linking static target lib/librte_mbuf.a 00:03:35.109 [159/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:35.109 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:35.109 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:35.109 [162/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.109 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:35.109 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:35.109 [165/268] Linking target lib/librte_log.so.24.1 00:03:35.109 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:35.109 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:35.109 [168/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:35.109 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:35.109 [170/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:35.109 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:35.109 [172/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:35.109 [173/268] Linking static target lib/librte_hash.a 00:03:35.109 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:35.109 [175/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:35.109 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:35.109 [177/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:35.109 [178/268] Linking static target lib/librte_power.a 00:03:35.109 [179/268] Linking static target lib/librte_reorder.a 00:03:35.109 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:35.109 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:35.109 [182/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:35.109 
[183/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:35.109 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:35.109 [185/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.109 [186/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.109 [187/268] Linking target lib/librte_kvargs.so.24.1 00:03:35.109 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:35.369 [189/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:35.369 [190/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.369 [191/268] Linking target lib/librte_telemetry.so.24.1 00:03:35.369 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:35.369 [193/268] Linking static target lib/librte_security.a 00:03:35.369 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:35.369 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:35.369 [196/268] Linking static target lib/librte_cryptodev.a 00:03:35.369 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:35.369 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:35.369 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:35.369 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:35.369 [201/268] Linking static target drivers/librte_bus_vdev.a 00:03:35.369 [202/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:35.369 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:35.369 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:35.369 [205/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.369 [206/268] Linking static target drivers/librte_bus_pci.a 00:03:35.369 [207/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:35.369 [208/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:35.369 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:35.369 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:35.369 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:35.369 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:35.369 [213/268] Linking static target drivers/librte_mempool_ring.a 00:03:35.369 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.628 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.628 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.628 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.628 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.628 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:35.628 [220/268] Linking static 
target lib/librte_ethdev.a 00:03:35.628 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.886 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.886 [223/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.886 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.886 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:35.886 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.145 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.082 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:37.082 [229/268] Linking static target lib/librte_vhost.a 00:03:37.082 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.459 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.736 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.319 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.577 [234/268] Linking target lib/librte_eal.so.24.1 00:03:44.577 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:44.577 [236/268] Linking target lib/librte_timer.so.24.1 00:03:44.577 [237/268] Linking target lib/librte_pci.so.24.1 00:03:44.577 [238/268] Linking target lib/librte_ring.so.24.1 00:03:44.577 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:44.577 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:44.577 [241/268] Linking target lib/librte_meter.so.24.1 00:03:44.837 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:44.837 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:44.837 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:44.837 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:44.837 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:44.837 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:44.837 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:44.837 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:44.837 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:44.837 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:44.837 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:45.096 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:45.096 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:45.096 [255/268] Linking target lib/librte_net.so.24.1 00:03:45.096 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:45.096 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:45.096 [258/268] Linking target lib/librte_reorder.so.24.1 00:03:45.096 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:45.355 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:45.356 
[261/268] Linking target lib/librte_hash.so.24.1 00:03:45.356 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:45.356 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:45.356 [264/268] Linking target lib/librte_security.so.24.1 00:03:45.356 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:45.356 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:45.356 [267/268] Linking target lib/librte_power.so.24.1 00:03:45.356 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:45.356 INFO: autodetecting backend as ninja 00:03:45.356 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:03:55.338 CC lib/ut_mock/mock.o 00:03:55.338 CC lib/log/log.o 00:03:55.338 CC lib/log/log_flags.o 00:03:55.338 CC lib/log/log_deprecated.o 00:03:55.338 CC lib/ut/ut.o 00:03:55.338 LIB libspdk_ut_mock.a 00:03:55.338 LIB libspdk_log.a 00:03:55.338 SO libspdk_ut_mock.so.6.0 00:03:55.338 SO libspdk_log.so.7.1 00:03:55.338 LIB libspdk_ut.a 00:03:55.338 SYMLINK libspdk_ut_mock.so 00:03:55.338 SO libspdk_ut.so.2.0 00:03:55.338 SYMLINK libspdk_log.so 00:03:55.338 SYMLINK libspdk_ut.so 00:03:55.338 CC lib/util/base64.o 00:03:55.338 CC lib/util/bit_array.o 00:03:55.338 CC lib/util/cpuset.o 00:03:55.338 CC lib/util/crc16.o 00:03:55.338 CC lib/util/crc32.o 00:03:55.338 CC lib/util/crc32c.o 00:03:55.338 CC lib/util/crc64.o 00:03:55.338 CC lib/util/crc32_ieee.o 00:03:55.338 CC lib/util/dif.o 00:03:55.338 CC lib/util/fd.o 00:03:55.338 CC lib/util/fd_group.o 00:03:55.338 CC lib/ioat/ioat.o 00:03:55.338 CC lib/util/file.o 00:03:55.338 CC lib/util/hexlify.o 00:03:55.338 CC lib/util/iov.o 00:03:55.338 CC lib/util/math.o 00:03:55.338 CC lib/util/net.o 00:03:55.338 CC lib/util/pipe.o 00:03:55.338 CC lib/util/strerror_tls.o 00:03:55.338 CC lib/util/string.o 00:03:55.338 CC lib/util/uuid.o 00:03:55.338 CC lib/util/xor.o 00:03:55.338 CC lib/util/zipf.o 00:03:55.338 CC lib/util/md5.o 00:03:55.338 CXX lib/trace_parser/trace.o 00:03:55.338 CC lib/dma/dma.o 00:03:55.338 CC lib/vfio_user/host/vfio_user_pci.o 00:03:55.338 CC lib/vfio_user/host/vfio_user.o 00:03:55.338 LIB libspdk_dma.a 00:03:55.338 SO libspdk_dma.so.5.0 00:03:55.338 LIB libspdk_ioat.a 00:03:55.338 SYMLINK libspdk_dma.so 00:03:55.338 SO libspdk_ioat.so.7.0 00:03:55.338 SYMLINK libspdk_ioat.so 00:03:55.338 LIB libspdk_vfio_user.a 00:03:55.338 SO libspdk_vfio_user.so.5.0 00:03:55.338 LIB libspdk_util.a 00:03:55.338 SYMLINK libspdk_vfio_user.so 00:03:55.338 SO libspdk_util.so.10.1 00:03:55.338 SYMLINK libspdk_util.so 00:03:55.338 LIB libspdk_trace_parser.a 00:03:55.338 SO libspdk_trace_parser.so.6.0 00:03:55.338 SYMLINK libspdk_trace_parser.so 00:03:55.338 CC lib/rdma_utils/rdma_utils.o 00:03:55.338 CC lib/idxd/idxd_user.o 00:03:55.338 CC lib/idxd/idxd.o 00:03:55.338 CC lib/idxd/idxd_kernel.o 00:03:55.338 CC lib/json/json_parse.o 00:03:55.338 CC lib/vmd/vmd.o 00:03:55.338 CC lib/env_dpdk/memory.o 00:03:55.338 CC lib/json/json_util.o 00:03:55.338 CC lib/env_dpdk/env.o 00:03:55.338 CC lib/vmd/led.o 00:03:55.338 CC lib/env_dpdk/init.o 00:03:55.338 CC lib/json/json_write.o 00:03:55.338 CC lib/env_dpdk/threads.o 00:03:55.338 CC lib/env_dpdk/pci.o 00:03:55.338 CC lib/env_dpdk/pci_ioat.o 00:03:55.338 CC lib/env_dpdk/pci_virtio.o 00:03:55.338 CC lib/env_dpdk/pci_event.o 00:03:55.338 CC lib/env_dpdk/pci_vmd.o 00:03:55.338 CC lib/env_dpdk/pci_idxd.o 00:03:55.338 CC lib/env_dpdk/sigbus_handler.o 00:03:55.338 
CC lib/env_dpdk/pci_dpdk.o 00:03:55.338 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:55.338 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:55.338 CC lib/conf/conf.o 00:03:55.597 LIB libspdk_rdma_utils.a 00:03:55.597 LIB libspdk_conf.a 00:03:55.597 SO libspdk_rdma_utils.so.1.0 00:03:55.597 LIB libspdk_json.a 00:03:55.597 SO libspdk_conf.so.6.0 00:03:55.597 SYMLINK libspdk_rdma_utils.so 00:03:55.597 SO libspdk_json.so.6.0 00:03:55.597 SYMLINK libspdk_conf.so 00:03:55.856 SYMLINK libspdk_json.so 00:03:55.856 LIB libspdk_idxd.a 00:03:55.856 SO libspdk_idxd.so.12.1 00:03:55.856 LIB libspdk_vmd.a 00:03:55.856 SO libspdk_vmd.so.6.0 00:03:55.856 SYMLINK libspdk_idxd.so 00:03:55.856 CC lib/rdma_provider/common.o 00:03:55.856 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:55.856 SYMLINK libspdk_vmd.so 00:03:56.114 CC lib/jsonrpc/jsonrpc_client.o 00:03:56.115 CC lib/jsonrpc/jsonrpc_server.o 00:03:56.115 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:56.115 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:56.115 LIB libspdk_rdma_provider.a 00:03:56.115 SO libspdk_rdma_provider.so.7.0 00:03:56.115 SYMLINK libspdk_rdma_provider.so 00:03:56.115 LIB libspdk_jsonrpc.a 00:03:56.373 SO libspdk_jsonrpc.so.6.0 00:03:56.373 SYMLINK libspdk_jsonrpc.so 00:03:56.373 LIB libspdk_env_dpdk.a 00:03:56.373 SO libspdk_env_dpdk.so.15.1 00:03:56.373 SYMLINK libspdk_env_dpdk.so 00:03:56.632 CC lib/rpc/rpc.o 00:03:56.891 LIB libspdk_rpc.a 00:03:56.891 SO libspdk_rpc.so.6.0 00:03:56.891 SYMLINK libspdk_rpc.so 00:03:57.150 CC lib/notify/notify_rpc.o 00:03:57.150 CC lib/notify/notify.o 00:03:57.150 CC lib/keyring/keyring.o 00:03:57.150 CC lib/keyring/keyring_rpc.o 00:03:57.150 CC lib/trace/trace.o 00:03:57.150 CC lib/trace/trace_flags.o 00:03:57.150 CC lib/trace/trace_rpc.o 00:03:57.409 LIB libspdk_notify.a 00:03:57.409 SO libspdk_notify.so.6.0 00:03:57.409 LIB libspdk_keyring.a 00:03:57.409 SO libspdk_keyring.so.2.0 00:03:57.409 LIB libspdk_trace.a 00:03:57.409 SYMLINK libspdk_notify.so 00:03:57.409 SO libspdk_trace.so.11.0 00:03:57.409 SYMLINK libspdk_keyring.so 00:03:57.409 SYMLINK libspdk_trace.so 00:03:57.668 CC lib/thread/thread.o 00:03:57.668 CC lib/thread/iobuf.o 00:03:57.668 CC lib/sock/sock.o 00:03:57.668 CC lib/sock/sock_rpc.o 00:03:58.236 LIB libspdk_sock.a 00:03:58.236 SO libspdk_sock.so.10.0 00:03:58.236 SYMLINK libspdk_sock.so 00:03:58.495 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:58.495 CC lib/nvme/nvme_ctrlr.o 00:03:58.495 CC lib/nvme/nvme_ns_cmd.o 00:03:58.495 CC lib/nvme/nvme_fabric.o 00:03:58.495 CC lib/nvme/nvme_ns.o 00:03:58.495 CC lib/nvme/nvme_pcie_common.o 00:03:58.495 CC lib/nvme/nvme_pcie.o 00:03:58.495 CC lib/nvme/nvme_qpair.o 00:03:58.495 CC lib/nvme/nvme.o 00:03:58.495 CC lib/nvme/nvme_quirks.o 00:03:58.495 CC lib/nvme/nvme_transport.o 00:03:58.495 CC lib/nvme/nvme_discovery.o 00:03:58.495 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:58.495 CC lib/nvme/nvme_tcp.o 00:03:58.495 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:58.495 CC lib/nvme/nvme_opal.o 00:03:58.495 CC lib/nvme/nvme_io_msg.o 00:03:58.495 CC lib/nvme/nvme_poll_group.o 00:03:58.495 CC lib/nvme/nvme_zns.o 00:03:58.495 CC lib/nvme/nvme_stubs.o 00:03:58.495 CC lib/nvme/nvme_auth.o 00:03:58.495 CC lib/nvme/nvme_cuse.o 00:03:58.495 CC lib/nvme/nvme_rdma.o 00:03:58.753 LIB libspdk_thread.a 00:03:58.753 SO libspdk_thread.so.11.0 00:03:58.753 SYMLINK libspdk_thread.so 00:03:59.011 CC lib/accel/accel.o 00:03:59.011 CC lib/accel/accel_rpc.o 00:03:59.011 CC lib/accel/accel_sw.o 00:03:59.011 CC lib/init/json_config.o 00:03:59.011 CC lib/init/subsystem.o 00:03:59.011 CC lib/blob/zeroes.o 
00:03:59.011 CC lib/blob/blobstore.o 00:03:59.011 CC lib/fsdev/fsdev.o 00:03:59.011 CC lib/init/subsystem_rpc.o 00:03:59.011 CC lib/blob/request.o 00:03:59.011 CC lib/fsdev/fsdev_io.o 00:03:59.011 CC lib/init/rpc.o 00:03:59.011 CC lib/fsdev/fsdev_rpc.o 00:03:59.011 CC lib/blob/blob_bs_dev.o 00:03:59.011 CC lib/virtio/virtio.o 00:03:59.011 CC lib/virtio/virtio_vhost_user.o 00:03:59.011 CC lib/virtio/virtio_vfio_user.o 00:03:59.011 CC lib/virtio/virtio_pci.o 00:03:59.270 LIB libspdk_init.a 00:03:59.270 SO libspdk_init.so.6.0 00:03:59.270 LIB libspdk_virtio.a 00:03:59.528 SYMLINK libspdk_init.so 00:03:59.528 SO libspdk_virtio.so.7.0 00:03:59.528 SYMLINK libspdk_virtio.so 00:03:59.528 LIB libspdk_fsdev.a 00:03:59.528 SO libspdk_fsdev.so.2.0 00:03:59.787 SYMLINK libspdk_fsdev.so 00:03:59.787 CC lib/event/log_rpc.o 00:03:59.787 CC lib/event/app.o 00:03:59.787 CC lib/event/reactor.o 00:03:59.787 CC lib/event/scheduler_static.o 00:03:59.787 CC lib/event/app_rpc.o 00:03:59.787 LIB libspdk_accel.a 00:03:59.787 SO libspdk_accel.so.16.0 00:04:00.046 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:00.046 LIB libspdk_nvme.a 00:04:00.046 SYMLINK libspdk_accel.so 00:04:00.046 LIB libspdk_event.a 00:04:00.046 SO libspdk_nvme.so.15.0 00:04:00.046 SO libspdk_event.so.14.0 00:04:00.046 SYMLINK libspdk_event.so 00:04:00.304 SYMLINK libspdk_nvme.so 00:04:00.304 CC lib/bdev/bdev.o 00:04:00.304 CC lib/bdev/bdev_rpc.o 00:04:00.304 CC lib/bdev/bdev_zone.o 00:04:00.304 CC lib/bdev/part.o 00:04:00.304 CC lib/bdev/scsi_nvme.o 00:04:00.304 LIB libspdk_fuse_dispatcher.a 00:04:00.304 SO libspdk_fuse_dispatcher.so.1.0 00:04:00.563 SYMLINK libspdk_fuse_dispatcher.so 00:04:01.130 LIB libspdk_blob.a 00:04:01.130 SO libspdk_blob.so.12.0 00:04:01.130 SYMLINK libspdk_blob.so 00:04:01.388 CC lib/blobfs/blobfs.o 00:04:01.388 CC lib/blobfs/tree.o 00:04:01.388 CC lib/lvol/lvol.o 00:04:01.956 LIB libspdk_bdev.a 00:04:01.956 LIB libspdk_blobfs.a 00:04:01.956 SO libspdk_blobfs.so.11.0 00:04:01.956 SO libspdk_bdev.so.17.0 00:04:01.956 SYMLINK libspdk_blobfs.so 00:04:01.956 SYMLINK libspdk_bdev.so 00:04:01.956 LIB libspdk_lvol.a 00:04:02.215 SO libspdk_lvol.so.11.0 00:04:02.215 SYMLINK libspdk_lvol.so 00:04:02.215 CC lib/nvmf/ctrlr.o 00:04:02.215 CC lib/nvmf/ctrlr_discovery.o 00:04:02.215 CC lib/nvmf/ctrlr_bdev.o 00:04:02.215 CC lib/nvmf/subsystem.o 00:04:02.215 CC lib/nvmf/nvmf_rpc.o 00:04:02.215 CC lib/nvmf/nvmf.o 00:04:02.215 CC lib/nvmf/tcp.o 00:04:02.215 CC lib/nvmf/transport.o 00:04:02.215 CC lib/nvmf/stubs.o 00:04:02.215 CC lib/nvmf/mdns_server.o 00:04:02.215 CC lib/nvmf/rdma.o 00:04:02.215 CC lib/nvmf/auth.o 00:04:02.215 CC lib/ftl/ftl_core.o 00:04:02.215 CC lib/ftl/ftl_init.o 00:04:02.215 CC lib/ftl/ftl_layout.o 00:04:02.215 CC lib/ftl/ftl_debug.o 00:04:02.215 CC lib/ftl/ftl_io.o 00:04:02.215 CC lib/ftl/ftl_sb.o 00:04:02.474 CC lib/ftl/ftl_l2p.o 00:04:02.474 CC lib/ftl/ftl_l2p_flat.o 00:04:02.474 CC lib/ftl/ftl_nv_cache.o 00:04:02.474 CC lib/ftl/ftl_band.o 00:04:02.474 CC lib/ftl/ftl_band_ops.o 00:04:02.474 CC lib/ftl/ftl_writer.o 00:04:02.474 CC lib/ftl/ftl_rq.o 00:04:02.474 CC lib/ftl/ftl_reloc.o 00:04:02.474 CC lib/scsi/dev.o 00:04:02.474 CC lib/scsi/lun.o 00:04:02.474 CC lib/ftl/ftl_l2p_cache.o 00:04:02.474 CC lib/scsi/port.o 00:04:02.474 CC lib/ftl/ftl_p2l.o 00:04:02.474 CC lib/scsi/scsi.o 00:04:02.474 CC lib/scsi/scsi_bdev.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt.o 00:04:02.474 CC lib/ftl/ftl_p2l_log.o 00:04:02.474 CC lib/scsi/scsi_rpc.o 00:04:02.474 CC lib/scsi/scsi_pr.o 00:04:02.474 CC lib/scsi/task.o 00:04:02.474 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:02.474 CC lib/nbd/nbd_rpc.o 00:04:02.474 CC lib/nbd/nbd.o 00:04:02.474 CC lib/ftl/utils/ftl_conf.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:02.474 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:02.474 CC lib/ftl/utils/ftl_md.o 00:04:02.474 CC lib/ftl/utils/ftl_mempool.o 00:04:02.474 CC lib/ftl/utils/ftl_bitmap.o 00:04:02.474 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:02.474 CC lib/ftl/utils/ftl_property.o 00:04:02.474 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:02.474 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:02.474 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:02.474 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:02.474 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:02.474 CC lib/ublk/ublk.o 00:04:02.474 CC lib/ublk/ublk_rpc.o 00:04:02.474 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:02.474 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:02.474 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:02.474 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:02.474 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:02.474 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:02.474 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:02.474 CC lib/ftl/ftl_trace.o 00:04:02.474 CC lib/ftl/base/ftl_base_dev.o 00:04:02.474 CC lib/ftl/base/ftl_base_bdev.o 00:04:02.733 LIB libspdk_nbd.a 00:04:02.991 SO libspdk_nbd.so.7.0 00:04:02.991 LIB libspdk_scsi.a 00:04:02.991 SYMLINK libspdk_nbd.so 00:04:02.991 SO libspdk_scsi.so.9.0 00:04:02.991 SYMLINK libspdk_scsi.so 00:04:02.991 LIB libspdk_ublk.a 00:04:02.991 SO libspdk_ublk.so.3.0 00:04:03.250 SYMLINK libspdk_ublk.so 00:04:03.250 CC lib/vhost/vhost.o 00:04:03.250 CC lib/vhost/vhost_rpc.o 00:04:03.250 CC lib/vhost/vhost_scsi.o 00:04:03.250 CC lib/vhost/rte_vhost_user.o 00:04:03.250 CC lib/vhost/vhost_blk.o 00:04:03.250 LIB libspdk_ftl.a 00:04:03.250 CC lib/iscsi/init_grp.o 00:04:03.250 CC lib/iscsi/conn.o 00:04:03.250 CC lib/iscsi/portal_grp.o 00:04:03.250 CC lib/iscsi/iscsi.o 00:04:03.250 CC lib/iscsi/param.o 00:04:03.250 CC lib/iscsi/iscsi_subsystem.o 00:04:03.250 CC lib/iscsi/tgt_node.o 00:04:03.250 CC lib/iscsi/iscsi_rpc.o 00:04:03.250 CC lib/iscsi/task.o 00:04:03.509 SO libspdk_ftl.so.9.0 00:04:03.768 SYMLINK libspdk_ftl.so 00:04:03.768 LIB libspdk_nvmf.a 00:04:04.027 SO libspdk_nvmf.so.20.0 00:04:04.027 LIB libspdk_vhost.a 00:04:04.027 SYMLINK libspdk_nvmf.so 00:04:04.027 SO libspdk_vhost.so.8.0 00:04:04.027 SYMLINK libspdk_vhost.so 00:04:04.286 LIB libspdk_iscsi.a 00:04:04.286 SO libspdk_iscsi.so.8.0 00:04:04.286 SYMLINK libspdk_iscsi.so 00:04:04.855 CC module/env_dpdk/env_dpdk_rpc.o 00:04:04.855 CC module/blob/bdev/blob_bdev.o 00:04:04.855 CC module/accel/iaa/accel_iaa.o 00:04:04.855 CC module/accel/iaa/accel_iaa_rpc.o 00:04:04.855 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:04.855 CC module/fsdev/aio/fsdev_aio.o 00:04:04.855 CC module/fsdev/aio/linux_aio_mgr.o 00:04:04.855 CC module/keyring/file/keyring_rpc.o 00:04:04.855 CC module/keyring/file/keyring.o 00:04:04.855 LIB libspdk_env_dpdk_rpc.a 00:04:05.113 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:05.113 CC module/keyring/linux/keyring.o 00:04:05.113 CC module/keyring/linux/keyring_rpc.o 00:04:05.113 CC 
module/accel/dsa/accel_dsa.o 00:04:05.113 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.113 CC module/accel/ioat/accel_ioat_rpc.o 00:04:05.113 CC module/scheduler/gscheduler/gscheduler.o 00:04:05.113 CC module/accel/ioat/accel_ioat.o 00:04:05.113 CC module/sock/posix/posix.o 00:04:05.113 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:05.113 CC module/accel/error/accel_error.o 00:04:05.113 CC module/accel/error/accel_error_rpc.o 00:04:05.113 SO libspdk_env_dpdk_rpc.so.6.0 00:04:05.113 SYMLINK libspdk_env_dpdk_rpc.so 00:04:05.113 LIB libspdk_keyring_file.a 00:04:05.113 LIB libspdk_keyring_linux.a 00:04:05.113 SO libspdk_keyring_file.so.2.0 00:04:05.113 LIB libspdk_accel_iaa.a 00:04:05.113 LIB libspdk_scheduler_gscheduler.a 00:04:05.113 LIB libspdk_scheduler_dpdk_governor.a 00:04:05.113 LIB libspdk_scheduler_dynamic.a 00:04:05.113 LIB libspdk_accel_ioat.a 00:04:05.113 SO libspdk_accel_iaa.so.3.0 00:04:05.113 SO libspdk_keyring_linux.so.1.0 00:04:05.113 SO libspdk_scheduler_gscheduler.so.4.0 00:04:05.113 SYMLINK libspdk_keyring_file.so 00:04:05.113 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:05.113 LIB libspdk_accel_error.a 00:04:05.113 SO libspdk_scheduler_dynamic.so.4.0 00:04:05.113 SO libspdk_accel_ioat.so.6.0 00:04:05.113 LIB libspdk_blob_bdev.a 00:04:05.113 SO libspdk_accel_error.so.2.0 00:04:05.114 SYMLINK libspdk_keyring_linux.so 00:04:05.114 SYMLINK libspdk_accel_iaa.so 00:04:05.114 SYMLINK libspdk_scheduler_gscheduler.so 00:04:05.114 LIB libspdk_accel_dsa.a 00:04:05.114 SYMLINK libspdk_scheduler_dynamic.so 00:04:05.114 SO libspdk_blob_bdev.so.12.0 00:04:05.114 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:05.373 SYMLINK libspdk_accel_ioat.so 00:04:05.373 SYMLINK libspdk_accel_error.so 00:04:05.373 SO libspdk_accel_dsa.so.5.0 00:04:05.373 SYMLINK libspdk_blob_bdev.so 00:04:05.373 SYMLINK libspdk_accel_dsa.so 00:04:05.373 LIB libspdk_fsdev_aio.a 00:04:05.373 SO libspdk_fsdev_aio.so.1.0 00:04:05.631 LIB libspdk_sock_posix.a 00:04:05.631 SO libspdk_sock_posix.so.6.0 00:04:05.631 SYMLINK libspdk_fsdev_aio.so 00:04:05.631 SYMLINK libspdk_sock_posix.so 00:04:05.631 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:05.631 CC module/blobfs/bdev/blobfs_bdev.o 00:04:05.631 CC module/bdev/lvol/vbdev_lvol.o 00:04:05.631 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:05.631 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:05.631 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:05.631 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:05.631 CC module/bdev/error/vbdev_error.o 00:04:05.631 CC module/bdev/error/vbdev_error_rpc.o 00:04:05.631 CC module/bdev/delay/vbdev_delay.o 00:04:05.631 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:05.631 CC module/bdev/malloc/bdev_malloc.o 00:04:05.631 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:05.631 CC module/bdev/passthru/vbdev_passthru.o 00:04:05.631 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:05.631 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:05.631 CC module/bdev/ftl/bdev_ftl.o 00:04:05.631 CC module/bdev/iscsi/bdev_iscsi.o 00:04:05.631 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:05.631 CC module/bdev/gpt/vbdev_gpt.o 00:04:05.631 CC module/bdev/split/vbdev_split.o 00:04:05.631 CC module/bdev/split/vbdev_split_rpc.o 00:04:05.631 CC module/bdev/aio/bdev_aio.o 00:04:05.631 CC module/bdev/gpt/gpt.o 00:04:05.631 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:05.631 CC module/bdev/aio/bdev_aio_rpc.o 00:04:05.631 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:05.890 CC module/bdev/null/bdev_null.o 00:04:05.890 CC module/bdev/null/bdev_null_rpc.o 
00:04:05.890 CC module/bdev/nvme/bdev_nvme.o 00:04:05.890 CC module/bdev/nvme/bdev_mdns_client.o 00:04:05.890 CC module/bdev/nvme/nvme_rpc.o 00:04:05.890 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:05.890 CC module/bdev/nvme/vbdev_opal.o 00:04:05.890 CC module/bdev/raid/bdev_raid.o 00:04:05.890 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:05.890 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:05.890 CC module/bdev/raid/bdev_raid_rpc.o 00:04:05.890 CC module/bdev/raid/bdev_raid_sb.o 00:04:05.890 CC module/bdev/raid/raid0.o 00:04:05.890 CC module/bdev/raid/raid1.o 00:04:05.890 CC module/bdev/raid/concat.o 00:04:05.890 LIB libspdk_blobfs_bdev.a 00:04:05.890 SO libspdk_blobfs_bdev.so.6.0 00:04:05.890 SYMLINK libspdk_blobfs_bdev.so 00:04:06.148 LIB libspdk_bdev_ftl.a 00:04:06.148 LIB libspdk_bdev_split.a 00:04:06.148 LIB libspdk_bdev_passthru.a 00:04:06.148 LIB libspdk_bdev_error.a 00:04:06.148 SO libspdk_bdev_ftl.so.6.0 00:04:06.148 SO libspdk_bdev_passthru.so.6.0 00:04:06.148 LIB libspdk_bdev_null.a 00:04:06.148 LIB libspdk_bdev_gpt.a 00:04:06.148 SO libspdk_bdev_split.so.6.0 00:04:06.148 LIB libspdk_bdev_aio.a 00:04:06.148 SO libspdk_bdev_error.so.6.0 00:04:06.148 SO libspdk_bdev_null.so.6.0 00:04:06.148 SO libspdk_bdev_aio.so.6.0 00:04:06.148 SO libspdk_bdev_gpt.so.6.0 00:04:06.148 LIB libspdk_bdev_iscsi.a 00:04:06.148 LIB libspdk_bdev_delay.a 00:04:06.148 SYMLINK libspdk_bdev_ftl.so 00:04:06.148 LIB libspdk_bdev_malloc.a 00:04:06.148 LIB libspdk_bdev_zone_block.a 00:04:06.148 SYMLINK libspdk_bdev_passthru.so 00:04:06.148 SYMLINK libspdk_bdev_split.so 00:04:06.148 SYMLINK libspdk_bdev_null.so 00:04:06.148 SO libspdk_bdev_malloc.so.6.0 00:04:06.148 SYMLINK libspdk_bdev_error.so 00:04:06.148 SYMLINK libspdk_bdev_gpt.so 00:04:06.148 SO libspdk_bdev_delay.so.6.0 00:04:06.148 SO libspdk_bdev_iscsi.so.6.0 00:04:06.148 SO libspdk_bdev_zone_block.so.6.0 00:04:06.148 SYMLINK libspdk_bdev_aio.so 00:04:06.148 LIB libspdk_bdev_lvol.a 00:04:06.148 LIB libspdk_bdev_virtio.a 00:04:06.148 SYMLINK libspdk_bdev_malloc.so 00:04:06.148 SO libspdk_bdev_lvol.so.6.0 00:04:06.148 SYMLINK libspdk_bdev_iscsi.so 00:04:06.148 SO libspdk_bdev_virtio.so.6.0 00:04:06.148 SYMLINK libspdk_bdev_delay.so 00:04:06.148 SYMLINK libspdk_bdev_zone_block.so 00:04:06.148 SYMLINK libspdk_bdev_lvol.so 00:04:06.148 SYMLINK libspdk_bdev_virtio.so 00:04:06.406 LIB libspdk_bdev_raid.a 00:04:06.664 SO libspdk_bdev_raid.so.6.0 00:04:06.664 SYMLINK libspdk_bdev_raid.so 00:04:07.599 LIB libspdk_bdev_nvme.a 00:04:07.599 SO libspdk_bdev_nvme.so.7.1 00:04:07.599 SYMLINK libspdk_bdev_nvme.so 00:04:08.165 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:08.165 CC module/event/subsystems/vmd/vmd.o 00:04:08.165 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:08.165 CC module/event/subsystems/sock/sock.o 00:04:08.165 CC module/event/subsystems/keyring/keyring.o 00:04:08.165 CC module/event/subsystems/iobuf/iobuf.o 00:04:08.165 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:08.165 CC module/event/subsystems/fsdev/fsdev.o 00:04:08.165 CC module/event/subsystems/scheduler/scheduler.o 00:04:08.165 LIB libspdk_event_vhost_blk.a 00:04:08.423 LIB libspdk_event_keyring.a 00:04:08.423 LIB libspdk_event_vmd.a 00:04:08.423 SO libspdk_event_keyring.so.1.0 00:04:08.423 SO libspdk_event_vhost_blk.so.3.0 00:04:08.423 LIB libspdk_event_sock.a 00:04:08.423 SO libspdk_event_vmd.so.6.0 00:04:08.423 LIB libspdk_event_iobuf.a 00:04:08.423 LIB libspdk_event_fsdev.a 00:04:08.423 LIB libspdk_event_scheduler.a 00:04:08.423 SYMLINK libspdk_event_vhost_blk.so 00:04:08.423 
SO libspdk_event_sock.so.5.0 00:04:08.423 SO libspdk_event_iobuf.so.3.0 00:04:08.423 SO libspdk_event_fsdev.so.1.0 00:04:08.423 SYMLINK libspdk_event_vmd.so 00:04:08.423 SYMLINK libspdk_event_keyring.so 00:04:08.423 SO libspdk_event_scheduler.so.4.0 00:04:08.423 SYMLINK libspdk_event_iobuf.so 00:04:08.423 SYMLINK libspdk_event_sock.so 00:04:08.423 SYMLINK libspdk_event_fsdev.so 00:04:08.423 SYMLINK libspdk_event_scheduler.so 00:04:08.681 CC module/event/subsystems/accel/accel.o 00:04:08.939 LIB libspdk_event_accel.a 00:04:08.939 SO libspdk_event_accel.so.6.0 00:04:08.939 SYMLINK libspdk_event_accel.so 00:04:09.197 CC module/event/subsystems/bdev/bdev.o 00:04:09.456 LIB libspdk_event_bdev.a 00:04:09.456 SO libspdk_event_bdev.so.6.0 00:04:09.456 SYMLINK libspdk_event_bdev.so 00:04:09.715 CC module/event/subsystems/scsi/scsi.o 00:04:09.715 CC module/event/subsystems/nbd/nbd.o 00:04:09.715 CC module/event/subsystems/ublk/ublk.o 00:04:09.715 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:09.715 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:09.973 LIB libspdk_event_nbd.a 00:04:09.973 LIB libspdk_event_scsi.a 00:04:09.973 LIB libspdk_event_ublk.a 00:04:09.973 SO libspdk_event_nbd.so.6.0 00:04:09.973 SO libspdk_event_scsi.so.6.0 00:04:09.973 SO libspdk_event_ublk.so.3.0 00:04:09.973 LIB libspdk_event_nvmf.a 00:04:09.973 SYMLINK libspdk_event_scsi.so 00:04:09.973 SYMLINK libspdk_event_nbd.so 00:04:09.973 SYMLINK libspdk_event_ublk.so 00:04:09.973 SO libspdk_event_nvmf.so.6.0 00:04:09.973 SYMLINK libspdk_event_nvmf.so 00:04:10.231 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:10.231 CC module/event/subsystems/iscsi/iscsi.o 00:04:10.489 LIB libspdk_event_vhost_scsi.a 00:04:10.489 SO libspdk_event_vhost_scsi.so.3.0 00:04:10.489 LIB libspdk_event_iscsi.a 00:04:10.489 SO libspdk_event_iscsi.so.6.0 00:04:10.489 SYMLINK libspdk_event_vhost_scsi.so 00:04:10.489 SYMLINK libspdk_event_iscsi.so 00:04:10.748 SO libspdk.so.6.0 00:04:10.748 SYMLINK libspdk.so 00:04:11.005 CC app/trace_record/trace_record.o 00:04:11.005 CC app/spdk_nvme_identify/identify.o 00:04:11.005 CC app/spdk_lspci/spdk_lspci.o 00:04:11.005 CXX app/trace/trace.o 00:04:11.005 CC app/spdk_top/spdk_top.o 00:04:11.005 CC app/spdk_nvme_perf/perf.o 00:04:11.005 CC test/rpc_client/rpc_client_test.o 00:04:11.005 CC app/spdk_nvme_discover/discovery_aer.o 00:04:11.006 TEST_HEADER include/spdk/accel_module.h 00:04:11.006 TEST_HEADER include/spdk/accel.h 00:04:11.006 TEST_HEADER include/spdk/assert.h 00:04:11.006 TEST_HEADER include/spdk/base64.h 00:04:11.006 TEST_HEADER include/spdk/bdev.h 00:04:11.006 TEST_HEADER include/spdk/barrier.h 00:04:11.006 TEST_HEADER include/spdk/bit_array.h 00:04:11.006 TEST_HEADER include/spdk/bdev_module.h 00:04:11.006 TEST_HEADER include/spdk/bdev_zone.h 00:04:11.006 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:11.006 TEST_HEADER include/spdk/bit_pool.h 00:04:11.006 TEST_HEADER include/spdk/blob_bdev.h 00:04:11.006 TEST_HEADER include/spdk/blobfs.h 00:04:11.006 TEST_HEADER include/spdk/blob.h 00:04:11.006 TEST_HEADER include/spdk/conf.h 00:04:11.006 TEST_HEADER include/spdk/cpuset.h 00:04:11.006 CC app/spdk_dd/spdk_dd.o 00:04:11.006 CC app/iscsi_tgt/iscsi_tgt.o 00:04:11.006 TEST_HEADER include/spdk/config.h 00:04:11.006 TEST_HEADER include/spdk/crc16.h 00:04:11.006 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:11.006 TEST_HEADER include/spdk/crc32.h 00:04:11.006 TEST_HEADER include/spdk/crc64.h 00:04:11.006 TEST_HEADER include/spdk/dma.h 00:04:11.006 TEST_HEADER include/spdk/dif.h 00:04:11.006 
TEST_HEADER include/spdk/env_dpdk.h 00:04:11.006 TEST_HEADER include/spdk/endian.h 00:04:11.006 TEST_HEADER include/spdk/env.h 00:04:11.006 CC app/spdk_tgt/spdk_tgt.o 00:04:11.006 TEST_HEADER include/spdk/event.h 00:04:11.006 TEST_HEADER include/spdk/fd_group.h 00:04:11.006 TEST_HEADER include/spdk/fd.h 00:04:11.006 TEST_HEADER include/spdk/fsdev.h 00:04:11.006 TEST_HEADER include/spdk/file.h 00:04:11.006 TEST_HEADER include/spdk/fsdev_module.h 00:04:11.006 TEST_HEADER include/spdk/ftl.h 00:04:11.006 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:11.006 TEST_HEADER include/spdk/hexlify.h 00:04:11.006 TEST_HEADER include/spdk/gpt_spec.h 00:04:11.006 CC app/nvmf_tgt/nvmf_main.o 00:04:11.006 TEST_HEADER include/spdk/histogram_data.h 00:04:11.006 TEST_HEADER include/spdk/init.h 00:04:11.006 TEST_HEADER include/spdk/idxd_spec.h 00:04:11.006 TEST_HEADER include/spdk/ioat.h 00:04:11.006 TEST_HEADER include/spdk/idxd.h 00:04:11.006 TEST_HEADER include/spdk/ioat_spec.h 00:04:11.006 TEST_HEADER include/spdk/iscsi_spec.h 00:04:11.006 TEST_HEADER include/spdk/jsonrpc.h 00:04:11.006 TEST_HEADER include/spdk/json.h 00:04:11.006 TEST_HEADER include/spdk/keyring_module.h 00:04:11.006 TEST_HEADER include/spdk/keyring.h 00:04:11.006 TEST_HEADER include/spdk/lvol.h 00:04:11.006 TEST_HEADER include/spdk/md5.h 00:04:11.006 TEST_HEADER include/spdk/likely.h 00:04:11.006 TEST_HEADER include/spdk/log.h 00:04:11.006 TEST_HEADER include/spdk/memory.h 00:04:11.006 TEST_HEADER include/spdk/nbd.h 00:04:11.006 TEST_HEADER include/spdk/mmio.h 00:04:11.006 TEST_HEADER include/spdk/notify.h 00:04:11.006 TEST_HEADER include/spdk/net.h 00:04:11.006 TEST_HEADER include/spdk/nvme.h 00:04:11.006 TEST_HEADER include/spdk/nvme_intel.h 00:04:11.006 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:11.006 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:11.006 TEST_HEADER include/spdk/nvme_spec.h 00:04:11.006 TEST_HEADER include/spdk/nvme_zns.h 00:04:11.006 TEST_HEADER include/spdk/nvmf.h 00:04:11.006 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:11.006 TEST_HEADER include/spdk/nvmf_transport.h 00:04:11.006 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:11.006 TEST_HEADER include/spdk/nvmf_spec.h 00:04:11.006 TEST_HEADER include/spdk/opal.h 00:04:11.006 TEST_HEADER include/spdk/pipe.h 00:04:11.006 TEST_HEADER include/spdk/opal_spec.h 00:04:11.006 TEST_HEADER include/spdk/pci_ids.h 00:04:11.006 TEST_HEADER include/spdk/queue.h 00:04:11.006 TEST_HEADER include/spdk/rpc.h 00:04:11.006 TEST_HEADER include/spdk/reduce.h 00:04:11.006 TEST_HEADER include/spdk/scsi.h 00:04:11.006 TEST_HEADER include/spdk/scheduler.h 00:04:11.006 TEST_HEADER include/spdk/sock.h 00:04:11.006 TEST_HEADER include/spdk/scsi_spec.h 00:04:11.006 TEST_HEADER include/spdk/stdinc.h 00:04:11.006 TEST_HEADER include/spdk/string.h 00:04:11.006 TEST_HEADER include/spdk/thread.h 00:04:11.006 TEST_HEADER include/spdk/trace_parser.h 00:04:11.006 TEST_HEADER include/spdk/trace.h 00:04:11.006 TEST_HEADER include/spdk/tree.h 00:04:11.006 TEST_HEADER include/spdk/util.h 00:04:11.006 TEST_HEADER include/spdk/ublk.h 00:04:11.006 TEST_HEADER include/spdk/version.h 00:04:11.006 TEST_HEADER include/spdk/uuid.h 00:04:11.006 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:11.006 TEST_HEADER include/spdk/vhost.h 00:04:11.006 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:11.006 TEST_HEADER include/spdk/xor.h 00:04:11.006 TEST_HEADER include/spdk/vmd.h 00:04:11.006 TEST_HEADER include/spdk/zipf.h 00:04:11.006 CXX test/cpp_headers/accel.o 00:04:11.006 CXX test/cpp_headers/accel_module.o 
00:04:11.006 CXX test/cpp_headers/barrier.o 00:04:11.006 CXX test/cpp_headers/assert.o 00:04:11.006 CXX test/cpp_headers/base64.o 00:04:11.006 CXX test/cpp_headers/bdev_module.o 00:04:11.006 CXX test/cpp_headers/bdev.o 00:04:11.006 CXX test/cpp_headers/bit_array.o 00:04:11.006 CXX test/cpp_headers/bit_pool.o 00:04:11.006 CXX test/cpp_headers/bdev_zone.o 00:04:11.006 CXX test/cpp_headers/blobfs.o 00:04:11.006 CXX test/cpp_headers/blob_bdev.o 00:04:11.006 CXX test/cpp_headers/blob.o 00:04:11.006 CXX test/cpp_headers/conf.o 00:04:11.006 CXX test/cpp_headers/blobfs_bdev.o 00:04:11.006 CXX test/cpp_headers/cpuset.o 00:04:11.006 CXX test/cpp_headers/config.o 00:04:11.006 CXX test/cpp_headers/crc32.o 00:04:11.006 CXX test/cpp_headers/crc64.o 00:04:11.006 CXX test/cpp_headers/crc16.o 00:04:11.006 CXX test/cpp_headers/dma.o 00:04:11.006 CXX test/cpp_headers/endian.o 00:04:11.006 CXX test/cpp_headers/env_dpdk.o 00:04:11.006 CXX test/cpp_headers/dif.o 00:04:11.006 CXX test/cpp_headers/env.o 00:04:11.006 CXX test/cpp_headers/event.o 00:04:11.006 CXX test/cpp_headers/file.o 00:04:11.006 CXX test/cpp_headers/fd_group.o 00:04:11.006 CXX test/cpp_headers/fd.o 00:04:11.006 CXX test/cpp_headers/fsdev.o 00:04:11.006 CXX test/cpp_headers/fsdev_module.o 00:04:11.006 CXX test/cpp_headers/ftl.o 00:04:11.006 CXX test/cpp_headers/hexlify.o 00:04:11.006 CXX test/cpp_headers/gpt_spec.o 00:04:11.006 CXX test/cpp_headers/fuse_dispatcher.o 00:04:11.006 CXX test/cpp_headers/idxd.o 00:04:11.006 CXX test/cpp_headers/histogram_data.o 00:04:11.006 CXX test/cpp_headers/init.o 00:04:11.006 CXX test/cpp_headers/ioat.o 00:04:11.006 CXX test/cpp_headers/idxd_spec.o 00:04:11.006 CXX test/cpp_headers/ioat_spec.o 00:04:11.006 CXX test/cpp_headers/iscsi_spec.o 00:04:11.278 CXX test/cpp_headers/json.o 00:04:11.278 CXX test/cpp_headers/jsonrpc.o 00:04:11.278 CXX test/cpp_headers/keyring.o 00:04:11.278 CXX test/cpp_headers/keyring_module.o 00:04:11.278 CXX test/cpp_headers/likely.o 00:04:11.278 CXX test/cpp_headers/lvol.o 00:04:11.278 CXX test/cpp_headers/log.o 00:04:11.278 CXX test/cpp_headers/md5.o 00:04:11.278 CXX test/cpp_headers/memory.o 00:04:11.278 CXX test/cpp_headers/mmio.o 00:04:11.278 CXX test/cpp_headers/net.o 00:04:11.278 CXX test/cpp_headers/nbd.o 00:04:11.278 CXX test/cpp_headers/notify.o 00:04:11.278 CXX test/cpp_headers/nvme_intel.o 00:04:11.278 CXX test/cpp_headers/nvme.o 00:04:11.278 CXX test/cpp_headers/nvme_ocssd.o 00:04:11.278 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:11.278 CXX test/cpp_headers/nvme_spec.o 00:04:11.278 CXX test/cpp_headers/nvme_zns.o 00:04:11.278 CXX test/cpp_headers/nvmf_cmd.o 00:04:11.278 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:11.278 CXX test/cpp_headers/nvmf.o 00:04:11.278 CXX test/cpp_headers/nvmf_transport.o 00:04:11.278 CXX test/cpp_headers/nvmf_spec.o 00:04:11.278 CXX test/cpp_headers/opal.o 00:04:11.278 CXX test/cpp_headers/opal_spec.o 00:04:11.278 CXX test/cpp_headers/pci_ids.o 00:04:11.278 CXX test/cpp_headers/pipe.o 00:04:11.278 CXX test/cpp_headers/rpc.o 00:04:11.278 CXX test/cpp_headers/reduce.o 00:04:11.278 CXX test/cpp_headers/queue.o 00:04:11.278 CXX test/cpp_headers/scheduler.o 00:04:11.278 CXX test/cpp_headers/scsi.o 00:04:11.278 CXX test/cpp_headers/scsi_spec.o 00:04:11.278 CC examples/util/zipf/zipf.o 00:04:11.278 CXX test/cpp_headers/sock.o 00:04:11.278 CXX test/cpp_headers/stdinc.o 00:04:11.278 CXX test/cpp_headers/string.o 00:04:11.278 LINK spdk_lspci 00:04:11.278 CXX test/cpp_headers/thread.o 00:04:11.278 CXX test/cpp_headers/trace.o 00:04:11.278 CXX 
test/cpp_headers/trace_parser.o 00:04:11.278 CXX test/cpp_headers/tree.o 00:04:11.278 CC app/fio/nvme/fio_plugin.o 00:04:11.278 CC examples/ioat/perf/perf.o 00:04:11.278 CC test/app/histogram_perf/histogram_perf.o 00:04:11.278 CC examples/ioat/verify/verify.o 00:04:11.278 CC test/env/memory/memory_ut.o 00:04:11.278 CC test/thread/poller_perf/poller_perf.o 00:04:11.278 CC test/env/vtophys/vtophys.o 00:04:11.278 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:11.278 CC test/app/stub/stub.o 00:04:11.278 CC test/app/jsoncat/jsoncat.o 00:04:11.278 CC app/fio/bdev/fio_plugin.o 00:04:11.278 CC test/env/pci/pci_ut.o 00:04:11.278 CC test/dma/test_dma/test_dma.o 00:04:11.278 CC test/app/bdev_svc/bdev_svc.o 00:04:11.278 CXX test/cpp_headers/ublk.o 00:04:11.278 LINK spdk_nvme_discover 00:04:11.278 LINK rpc_client_test 00:04:11.543 LINK nvmf_tgt 00:04:11.544 LINK spdk_tgt 00:04:11.544 LINK spdk_trace_record 00:04:11.544 LINK interrupt_tgt 00:04:11.803 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:11.803 LINK histogram_perf 00:04:11.803 LINK poller_perf 00:04:11.803 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:11.803 CXX test/cpp_headers/util.o 00:04:11.803 CC test/env/mem_callbacks/mem_callbacks.o 00:04:11.803 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:11.803 CXX test/cpp_headers/uuid.o 00:04:11.803 LINK iscsi_tgt 00:04:11.803 CXX test/cpp_headers/version.o 00:04:11.803 CXX test/cpp_headers/vfio_user_pci.o 00:04:11.803 CXX test/cpp_headers/vfio_user_spec.o 00:04:11.804 CXX test/cpp_headers/vhost.o 00:04:11.804 CXX test/cpp_headers/vmd.o 00:04:11.804 CXX test/cpp_headers/xor.o 00:04:11.804 CXX test/cpp_headers/zipf.o 00:04:11.804 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:11.804 LINK stub 00:04:11.804 LINK verify 00:04:11.804 LINK ioat_perf 00:04:11.804 LINK bdev_svc 00:04:11.804 LINK zipf 00:04:11.804 LINK jsoncat 00:04:11.804 LINK vtophys 00:04:11.804 LINK spdk_trace 00:04:11.804 LINK env_dpdk_post_init 00:04:12.062 LINK spdk_dd 00:04:12.062 LINK pci_ut 00:04:12.062 LINK spdk_bdev 00:04:12.321 LINK test_dma 00:04:12.321 LINK spdk_nvme 00:04:12.321 LINK vhost_fuzz 00:04:12.321 LINK nvme_fuzz 00:04:12.321 CC app/vhost/vhost.o 00:04:12.321 LINK spdk_nvme_identify 00:04:12.321 CC test/event/app_repeat/app_repeat.o 00:04:12.321 CC test/event/event_perf/event_perf.o 00:04:12.321 CC test/event/reactor_perf/reactor_perf.o 00:04:12.321 CC test/event/reactor/reactor.o 00:04:12.321 CC examples/vmd/led/led.o 00:04:12.321 LINK spdk_nvme_perf 00:04:12.321 LINK spdk_top 00:04:12.321 CC test/event/scheduler/scheduler.o 00:04:12.321 CC examples/sock/hello_world/hello_sock.o 00:04:12.321 CC examples/vmd/lsvmd/lsvmd.o 00:04:12.321 CC examples/idxd/perf/perf.o 00:04:12.321 CC examples/thread/thread/thread_ex.o 00:04:12.321 LINK mem_callbacks 00:04:12.321 LINK reactor 00:04:12.321 LINK app_repeat 00:04:12.321 LINK event_perf 00:04:12.321 LINK led 00:04:12.321 LINK vhost 00:04:12.578 LINK reactor_perf 00:04:12.578 LINK lsvmd 00:04:12.578 LINK memory_ut 00:04:12.578 LINK hello_sock 00:04:12.578 LINK scheduler 00:04:12.578 LINK thread 00:04:12.578 LINK idxd_perf 00:04:12.578 CC test/nvme/aer/aer.o 00:04:12.579 CC test/nvme/boot_partition/boot_partition.o 00:04:12.579 CC test/nvme/reset/reset.o 00:04:12.579 CC test/nvme/connect_stress/connect_stress.o 00:04:12.579 CC test/nvme/overhead/overhead.o 00:04:12.579 CC test/nvme/sgl/sgl.o 00:04:12.579 CC test/nvme/err_injection/err_injection.o 00:04:12.579 CC test/nvme/fdp/fdp.o 00:04:12.579 CC test/nvme/startup/startup.o 00:04:12.579 CC 
test/nvme/simple_copy/simple_copy.o 00:04:12.579 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:12.579 CC test/nvme/reserve/reserve.o 00:04:12.579 CC test/nvme/fused_ordering/fused_ordering.o 00:04:12.579 CC test/nvme/e2edp/nvme_dp.o 00:04:12.579 CC test/nvme/cuse/cuse.o 00:04:12.579 CC test/accel/dif/dif.o 00:04:12.579 CC test/nvme/compliance/nvme_compliance.o 00:04:12.579 CC test/blobfs/mkfs/mkfs.o 00:04:12.838 CC test/lvol/esnap/esnap.o 00:04:12.838 LINK startup 00:04:12.838 LINK boot_partition 00:04:12.838 LINK err_injection 00:04:12.838 LINK connect_stress 00:04:12.838 LINK doorbell_aers 00:04:12.838 LINK reserve 00:04:12.838 LINK fused_ordering 00:04:12.838 LINK mkfs 00:04:12.838 LINK aer 00:04:12.838 LINK reset 00:04:12.838 LINK simple_copy 00:04:12.838 LINK sgl 00:04:12.838 LINK overhead 00:04:12.838 LINK nvme_dp 00:04:12.838 LINK fdp 00:04:12.838 LINK nvme_compliance 00:04:12.838 CC examples/nvme/hotplug/hotplug.o 00:04:12.838 CC examples/nvme/reconnect/reconnect.o 00:04:13.096 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:13.096 CC examples/nvme/abort/abort.o 00:04:13.096 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:13.096 CC examples/nvme/hello_world/hello_world.o 00:04:13.096 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:13.096 CC examples/nvme/arbitration/arbitration.o 00:04:13.096 LINK iscsi_fuzz 00:04:13.096 CC examples/accel/perf/accel_perf.o 00:04:13.096 CC examples/blob/cli/blobcli.o 00:04:13.096 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:13.096 CC examples/blob/hello_world/hello_blob.o 00:04:13.096 LINK cmb_copy 00:04:13.096 LINK pmr_persistence 00:04:13.096 LINK hotplug 00:04:13.096 LINK dif 00:04:13.096 LINK hello_world 00:04:13.355 LINK reconnect 00:04:13.355 LINK arbitration 00:04:13.355 LINK abort 00:04:13.355 LINK hello_blob 00:04:13.355 LINK hello_fsdev 00:04:13.355 LINK nvme_manage 00:04:13.355 LINK accel_perf 00:04:13.355 LINK blobcli 00:04:13.613 LINK cuse 00:04:13.613 CC test/bdev/bdevio/bdevio.o 00:04:13.872 CC examples/bdev/hello_world/hello_bdev.o 00:04:13.872 CC examples/bdev/bdevperf/bdevperf.o 00:04:13.872 LINK bdevio 00:04:14.131 LINK hello_bdev 00:04:14.391 LINK bdevperf 00:04:14.959 CC examples/nvmf/nvmf/nvmf.o 00:04:15.217 LINK nvmf 00:04:16.154 LINK esnap 00:04:16.154 00:04:16.154 real 0m50.373s 00:04:16.154 user 7m2.682s 00:04:16.154 sys 3m33.673s 00:04:16.154 03:52:10 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:16.154 03:52:10 make -- common/autotest_common.sh@10 -- $ set +x 00:04:16.154 ************************************ 00:04:16.154 END TEST make 00:04:16.154 ************************************ 00:04:16.414 03:52:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:16.414 03:52:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:16.414 03:52:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:16.414 03:52:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.414 03:52:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:16.414 03:52:10 -- pm/common@44 -- $ pid=504219 00:04:16.414 03:52:10 -- pm/common@50 -- $ kill -TERM 504219 00:04:16.414 03:52:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.414 03:52:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:16.414 03:52:10 -- pm/common@44 -- $ pid=504220 00:04:16.414 03:52:10 -- pm/common@50 -- $ kill -TERM 504220 00:04:16.414 03:52:10 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.414 03:52:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:16.414 03:52:10 -- pm/common@44 -- $ pid=504222 00:04:16.414 03:52:10 -- pm/common@50 -- $ kill -TERM 504222 00:04:16.414 03:52:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.414 03:52:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:16.414 03:52:10 -- pm/common@44 -- $ pid=504251 00:04:16.414 03:52:10 -- pm/common@50 -- $ sudo -E kill -TERM 504251 00:04:16.414 03:52:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:16.414 03:52:10 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:04:16.414 03:52:10 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.414 03:52:10 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.414 03:52:10 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.414 03:52:10 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.414 03:52:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.414 03:52:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.414 03:52:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.414 03:52:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.414 03:52:10 -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.414 03:52:10 -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.414 03:52:10 -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.414 03:52:10 -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.414 03:52:10 -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.414 03:52:10 -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.414 03:52:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.414 03:52:10 -- scripts/common.sh@344 -- # case "$op" in 00:04:16.414 03:52:10 -- scripts/common.sh@345 -- # : 1 00:04:16.414 03:52:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.414 03:52:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.414 03:52:10 -- scripts/common.sh@365 -- # decimal 1 00:04:16.414 03:52:10 -- scripts/common.sh@353 -- # local d=1 00:04:16.414 03:52:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.414 03:52:10 -- scripts/common.sh@355 -- # echo 1 00:04:16.414 03:52:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.414 03:52:10 -- scripts/common.sh@366 -- # decimal 2 00:04:16.414 03:52:10 -- scripts/common.sh@353 -- # local d=2 00:04:16.414 03:52:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.414 03:52:10 -- scripts/common.sh@355 -- # echo 2 00:04:16.414 03:52:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.414 03:52:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.414 03:52:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.414 03:52:10 -- scripts/common.sh@368 -- # return 0 00:04:16.414 03:52:10 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.414 03:52:10 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.414 --rc genhtml_branch_coverage=1 00:04:16.414 --rc genhtml_function_coverage=1 00:04:16.414 --rc genhtml_legend=1 00:04:16.414 --rc geninfo_all_blocks=1 00:04:16.414 --rc geninfo_unexecuted_blocks=1 00:04:16.414 00:04:16.414 ' 00:04:16.414 03:52:10 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.414 --rc genhtml_branch_coverage=1 00:04:16.414 --rc genhtml_function_coverage=1 00:04:16.414 --rc genhtml_legend=1 00:04:16.414 --rc geninfo_all_blocks=1 00:04:16.414 --rc geninfo_unexecuted_blocks=1 00:04:16.414 00:04:16.414 ' 00:04:16.414 03:52:10 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.414 --rc genhtml_branch_coverage=1 00:04:16.414 --rc genhtml_function_coverage=1 00:04:16.414 --rc genhtml_legend=1 00:04:16.414 --rc geninfo_all_blocks=1 00:04:16.414 --rc geninfo_unexecuted_blocks=1 00:04:16.414 00:04:16.414 ' 00:04:16.414 03:52:10 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.414 --rc genhtml_branch_coverage=1 00:04:16.414 --rc genhtml_function_coverage=1 00:04:16.414 --rc genhtml_legend=1 00:04:16.414 --rc geninfo_all_blocks=1 00:04:16.414 --rc geninfo_unexecuted_blocks=1 00:04:16.414 00:04:16.414 ' 00:04:16.414 03:52:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:16.414 03:52:10 -- nvmf/common.sh@7 -- # uname -s 00:04:16.414 03:52:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.414 03:52:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.414 03:52:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.414 03:52:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.414 03:52:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:16.414 03:52:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.414 03:52:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:16.414 03:52:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.414 03:52:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.414 03:52:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.414 03:52:10 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:04:16.414 03:52:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:04:16.414 03:52:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.414 03:52:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.414 03:52:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:16.414 03:52:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.414 03:52:10 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:16.414 03:52:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.414 03:52:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.414 03:52:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.414 03:52:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.414 03:52:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.414 03:52:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.414 03:52:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.414 03:52:10 -- paths/export.sh@5 -- # export PATH 00:04:16.414 03:52:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.414 03:52:10 -- nvmf/common.sh@51 -- # : 0 00:04:16.414 03:52:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.414 03:52:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:16.414 03:52:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.414 03:52:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.415 03:52:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.415 03:52:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.415 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.415 03:52:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.415 03:52:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.415 03:52:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.415 03:52:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:16.415 03:52:10 -- spdk/autotest.sh@32 -- # uname -s 00:04:16.415 03:52:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:16.415 03:52:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:16.415 03:52:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:16.415 
03:52:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:16.415 03:52:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:16.415 03:52:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:16.674 03:52:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:16.674 03:52:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:16.674 03:52:10 -- spdk/autotest.sh@48 -- # udevadm_pid=566940 00:04:16.674 03:52:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:16.674 03:52:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:16.674 03:52:10 -- pm/common@17 -- # local monitor 00:04:16.674 03:52:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.674 03:52:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.674 03:52:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.674 03:52:10 -- pm/common@21 -- # date +%s 00:04:16.674 03:52:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.674 03:52:10 -- pm/common@21 -- # date +%s 00:04:16.674 03:52:10 -- pm/common@25 -- # sleep 1 00:04:16.674 03:52:10 -- pm/common@21 -- # date +%s 00:04:16.674 03:52:10 -- pm/common@21 -- # date +%s 00:04:16.674 03:52:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799130 00:04:16.674 03:52:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799130 00:04:16.674 03:52:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799130 00:04:16.674 03:52:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733799130 00:04:16.674 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799130_collect-cpu-temp.pm.log 00:04:16.674 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799130_collect-cpu-load.pm.log 00:04:16.674 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799130_collect-vmstat.pm.log 00:04:16.674 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733799130_collect-bmc-pm.bmc.pm.log 00:04:17.612 03:52:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:17.612 03:52:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:17.612 03:52:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.612 03:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:17.612 03:52:11 -- spdk/autotest.sh@59 -- # create_test_list 00:04:17.612 03:52:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:17.612 03:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:17.612 03:52:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:04:17.612 03:52:11 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:17.612 03:52:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:17.612 03:52:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:04:17.612 03:52:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:17.612 03:52:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:17.612 03:52:11 -- common/autotest_common.sh@1457 -- # uname 00:04:17.612 03:52:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:17.612 03:52:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:17.612 03:52:11 -- common/autotest_common.sh@1477 -- # uname 00:04:17.612 03:52:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:17.612 03:52:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:17.612 03:52:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:17.612 lcov: LCOV version 1.15 00:04:17.612 03:52:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:27.592 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:27.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:37.688 03:52:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:37.688 03:52:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.688 03:52:32 -- common/autotest_common.sh@10 -- # set +x 00:04:37.947 03:52:32 -- spdk/autotest.sh@78 -- # rm -f 00:04:37.947 03:52:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.483 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:40.483 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:40.483 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:40.483 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:40.483 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:40.743 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:04:42.647 03:52:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:42.647 03:52:36 -- 
common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:42.647 03:52:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:42.647 03:52:36 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:42.647 03:52:36 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:42.647 03:52:36 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:42.647 03:52:36 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:42.647 03:52:36 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:04:42.647 03:52:36 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:42.647 03:52:36 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:42.647 03:52:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:42.647 03:52:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.647 03:52:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:42.647 03:52:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:42.647 03:52:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.647 03:52:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.647 03:52:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:42.647 03:52:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:42.647 03:52:36 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:42.647 No valid GPT data, bailing 00:04:42.647 03:52:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:42.647 03:52:36 -- scripts/common.sh@394 -- # pt= 00:04:42.647 03:52:36 -- scripts/common.sh@395 -- # return 1 00:04:42.647 03:52:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:42.647 1+0 records in 00:04:42.647 1+0 records out 00:04:42.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0013969 s, 751 MB/s 00:04:42.647 03:52:36 -- spdk/autotest.sh@105 -- # sync 00:04:42.647 03:52:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:42.647 03:52:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:42.647 03:52:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:49.218 03:52:42 -- spdk/autotest.sh@111 -- # uname -s 00:04:49.218 03:52:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:49.218 03:52:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:49.218 03:52:42 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:51.125 Hugepages 00:04:51.125 node hugesize free / total 00:04:51.125 node0 1048576kB 0 / 0 00:04:51.125 node0 2048kB 0 / 0 00:04:51.125 node1 1048576kB 0 / 0 00:04:51.125 node1 2048kB 0 / 0 00:04:51.125 00:04:51.125 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.125 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:51.125 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:51.125 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:51.125 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:51.125 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:51.125 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:51.125 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:51.125 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:51.125 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:51.125 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:51.125 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:51.125 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:51.125 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:51.125 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:51.125 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:51.125 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:51.125 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:51.125 03:52:45 -- spdk/autotest.sh@117 -- # uname -s 00:04:51.125 03:52:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:51.125 03:52:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:51.125 03:52:45 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:53.662 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:53.662 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:53.922 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.212 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:58.590 03:52:52 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:59.529 03:52:53 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:59.529 03:52:53 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:59.529 03:52:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:59.529 03:52:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:59.529 03:52:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:59.529 03:52:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:59.529 03:52:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.529 03:52:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:59.529 03:52:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:59.529 03:52:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:59.529 03:52:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:59.529 03:52:53 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.821 Waiting for block devices as requested 00:05:02.821 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:02.821 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:02.821 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:02.821 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:02.821 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:02.821 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:02.821 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:02.821 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:02.821 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:02.821 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:03.080 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:03.080 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:03.080 0000:80:04.3 (8086 
2021): vfio-pci -> ioatdma 00:05:03.339 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:03.339 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:03.339 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:03.598 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:04.976 03:52:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:04.976 03:52:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:04.976 03:52:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:04.976 03:52:59 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:04.977 03:52:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:04.977 03:52:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:04.977 03:52:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:04.977 03:52:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:04.977 03:52:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:04.977 03:52:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:04.977 03:52:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:04.977 03:52:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:04.977 03:52:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:04.977 03:52:59 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:04.977 03:52:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:04.977 03:52:59 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:04.977 03:52:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:04.977 03:52:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.977 03:52:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:04.977 03:52:59 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:04.977 03:52:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:04.977 03:52:59 -- common/autotest_common.sh@1543 -- # continue 00:05:04.977 03:52:59 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:04.977 03:52:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.977 03:52:59 -- common/autotest_common.sh@10 -- # set +x 00:05:04.977 03:52:59 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:04.977 03:52:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.977 03:52:59 -- common/autotest_common.sh@10 -- # set +x 00:05:04.977 03:52:59 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:08.271 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 
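The id-ctrl probe a few entries up pipes nvme id-ctrl /dev/nvme0 through grep and cut and then masks the OACS word: oacs=' 0xe' has bit 0x8 set, which is how oacs_ns_manage comes out as 8 (namespace management supported). The same chain as a standalone sketch, assuming nvme-cli and a controller at /dev/nvme0:

  # Read Optional Admin Command Support and test the namespace-management bit;
  # bash arithmetic tolerates the leading space and 0x prefix that cut leaves.
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
  oacs_ns_manage=$((oacs & 0x8))
  [ "$oacs_ns_manage" -ne 0 ] && echo "namespace management supported"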
00:05:08.271 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:08.271 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.565 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:12.943 03:53:06 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:12.943 03:53:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.943 03:53:06 -- common/autotest_common.sh@10 -- # set +x 00:05:12.943 03:53:06 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:12.943 03:53:06 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:12.943 03:53:06 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:12.943 03:53:06 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:12.943 03:53:06 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:12.943 03:53:06 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:12.943 03:53:06 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:12.943 03:53:06 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:12.943 03:53:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.943 03:53:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.943 03:53:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.943 03:53:06 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.943 03:53:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.943 03:53:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:12.943 03:53:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:12.943 03:53:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:12.943 03:53:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:12.943 03:53:07 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:12.943 03:53:07 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:12.943 03:53:07 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:12.943 03:53:07 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:12.943 03:53:07 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:05:12.943 03:53:07 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:05:12.943 03:53:07 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=582559 00:05:12.943 03:53:07 -- common/autotest_common.sh@1585 -- # waitforlisten 582559 00:05:12.943 03:53:07 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.943 03:53:07 -- common/autotest_common.sh@835 -- # '[' -z 582559 ']' 00:05:12.943 03:53:07 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.943 03:53:07 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.943 03:53:07 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.943 03:53:07 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.943 03:53:07 -- common/autotest_common.sh@10 -- # set +x 00:05:12.943 [2024-12-10 03:53:07.101876] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
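get_nvme_bdfs_by_id above works in two passes: gen_nvme.sh | jq lists the controller addresses, then each bdf's sysfs device file is compared against 0x0a54, which here leaves only 0000:d8:00.0. Condensed into one loop (same commands the trace runs, relative paths for brevity):

  # List NVMe bdfs from the generated config, keep those whose PCI device ID
  # matches the target (0x0a54, the device ID of the NVMe drive in this run).
  for bdf in $(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
    [ "$(cat /sys/bus/pci/devices/$bdf/device)" = "0x0a54" ] && echo "$bdf"
  done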
00:05:12.943 [2024-12-10 03:53:07.101918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582559 ] 00:05:12.943 [2024-12-10 03:53:07.159845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.943 [2024-12-10 03:53:07.199549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.203 03:53:07 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.203 03:53:07 -- common/autotest_common.sh@868 -- # return 0 00:05:13.203 03:53:07 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:13.203 03:53:07 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:13.203 03:53:07 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:16.491 nvme0n1 00:05:16.491 03:53:10 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:16.491 [2024-12-10 03:53:10.549346] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:16.491 request: 00:05:16.491 { 00:05:16.491 "nvme_ctrlr_name": "nvme0", 00:05:16.491 "password": "test", 00:05:16.491 "method": "bdev_nvme_opal_revert", 00:05:16.491 "req_id": 1 00:05:16.491 } 00:05:16.491 Got JSON-RPC error response 00:05:16.491 response: 00:05:16.491 { 00:05:16.491 "code": -32602, 00:05:16.491 "message": "Invalid parameters" 00:05:16.491 } 00:05:16.491 03:53:10 -- common/autotest_common.sh@1591 -- # true 00:05:16.491 03:53:10 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:16.491 03:53:10 -- common/autotest_common.sh@1595 -- # killprocess 582559 00:05:16.491 03:53:10 -- common/autotest_common.sh@954 -- # '[' -z 582559 ']' 00:05:16.491 03:53:10 -- common/autotest_common.sh@958 -- # kill -0 582559 00:05:16.491 03:53:10 -- common/autotest_common.sh@959 -- # uname 00:05:16.491 03:53:10 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.491 03:53:10 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 582559 00:05:16.491 03:53:10 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.491 03:53:10 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.491 03:53:10 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 582559' 00:05:16.491 killing process with pid 582559 00:05:16.491 03:53:10 -- common/autotest_common.sh@973 -- # kill 582559 00:05:16.491 03:53:10 -- common/autotest_common.sh@978 -- # wait 582559 00:05:20.685 03:53:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:20.685 03:53:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:20.685 03:53:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:20.685 03:53:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:20.685 03:53:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:20.685 03:53:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.685 03:53:14 -- common/autotest_common.sh@10 -- # set +x 00:05:20.685 03:53:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:20.685 03:53:14 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:20.685 03:53:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.685 03:53:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 
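The opal revert above fails with -32602 because this controller reports no Opal support; the trace's '# true' right after shows the test tolerating that outcome. Run by hand against the default /var/tmp/spdk.sock that waitforlisten polled, the two RPCs are (both appear verbatim in the trace, paths shortened):

  # spdk_tgt must already be up and listening on the default RPC socket.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
  ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test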
00:05:20.685 03:53:14 -- common/autotest_common.sh@10 -- # set +x 00:05:20.685 ************************************ 00:05:20.685 START TEST env 00:05:20.685 ************************************ 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:20.685 * Looking for test storage... 00:05:20.685 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.685 03:53:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.685 03:53:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.685 03:53:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.685 03:53:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.685 03:53:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.685 03:53:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.685 03:53:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.685 03:53:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.685 03:53:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.685 03:53:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.685 03:53:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.685 03:53:14 env -- scripts/common.sh@344 -- # case "$op" in 00:05:20.685 03:53:14 env -- scripts/common.sh@345 -- # : 1 00:05:20.685 03:53:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.685 03:53:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.685 03:53:14 env -- scripts/common.sh@365 -- # decimal 1 00:05:20.685 03:53:14 env -- scripts/common.sh@353 -- # local d=1 00:05:20.685 03:53:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.685 03:53:14 env -- scripts/common.sh@355 -- # echo 1 00:05:20.685 03:53:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.685 03:53:14 env -- scripts/common.sh@366 -- # decimal 2 00:05:20.685 03:53:14 env -- scripts/common.sh@353 -- # local d=2 00:05:20.685 03:53:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.685 03:53:14 env -- scripts/common.sh@355 -- # echo 2 00:05:20.685 03:53:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.685 03:53:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.685 03:53:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.685 03:53:14 env -- scripts/common.sh@368 -- # return 0 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.685 --rc genhtml_branch_coverage=1 00:05:20.685 --rc genhtml_function_coverage=1 00:05:20.685 --rc genhtml_legend=1 00:05:20.685 --rc geninfo_all_blocks=1 00:05:20.685 --rc geninfo_unexecuted_blocks=1 00:05:20.685 00:05:20.685 ' 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.685 --rc genhtml_branch_coverage=1 00:05:20.685 --rc genhtml_function_coverage=1 00:05:20.685 --rc genhtml_legend=1 00:05:20.685 --rc geninfo_all_blocks=1 00:05:20.685 --rc geninfo_unexecuted_blocks=1 00:05:20.685 00:05:20.685 ' 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.685 --rc genhtml_branch_coverage=1 00:05:20.685 --rc genhtml_function_coverage=1 00:05:20.685 --rc genhtml_legend=1 00:05:20.685 --rc geninfo_all_blocks=1 00:05:20.685 --rc geninfo_unexecuted_blocks=1 00:05:20.685 00:05:20.685 ' 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.685 --rc genhtml_branch_coverage=1 00:05:20.685 --rc genhtml_function_coverage=1 00:05:20.685 --rc genhtml_legend=1 00:05:20.685 --rc geninfo_all_blocks=1 00:05:20.685 --rc geninfo_unexecuted_blocks=1 00:05:20.685 00:05:20.685 ' 00:05:20.685 03:53:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.685 03:53:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.685 03:53:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.685 ************************************ 00:05:20.685 START TEST env_memory 00:05:20.685 ************************************ 00:05:20.685 03:53:14 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.685 00:05:20.685 00:05:20.685 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.685 http://cunit.sourceforge.net/ 00:05:20.685 00:05:20.685 00:05:20.685 Suite: memory 00:05:20.686 Test: alloc and free memory map ...[2024-12-10 03:53:14.767983] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:20.686 passed 00:05:20.686 Test: mem map translation ...[2024-12-10 03:53:14.784996] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:20.686 [2024-12-10 03:53:14.785010] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:20.686 [2024-12-10 03:53:14.785041] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:20.686 [2024-12-10 03:53:14.785047] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:20.686 passed 00:05:20.686 Test: mem map registration ...[2024-12-10 03:53:14.818196] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:20.686 [2024-12-10 03:53:14.818208] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:20.686 passed 00:05:20.686 Test: mem map adjacent registrations ...passed 00:05:20.686 00:05:20.686 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.686 suites 1 1 n/a 0 0 00:05:20.686 tests 4 4 4 0 0 00:05:20.686 asserts 152 152 152 0 n/a 00:05:20.686 00:05:20.686 Elapsed time = 0.123 seconds 00:05:20.686 00:05:20.686 real 0m0.135s 00:05:20.686 user 0m0.128s 00:05:20.686 sys 0m0.007s 00:05:20.686 03:53:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.686 03:53:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:20.686 ************************************ 00:05:20.686 END TEST env_memory 00:05:20.686 ************************************ 00:05:20.686 03:53:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.686 03:53:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.686 03:53:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.686 03:53:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.686 ************************************ 00:05:20.686 START TEST env_vtophys 00:05:20.686 ************************************ 00:05:20.686 03:53:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.686 EAL: lib.eal log level changed from notice to debug 00:05:20.686 EAL: Detected lcore 0 as core 0 on socket 0 00:05:20.686 EAL: Detected lcore 1 as core 1 on socket 0 00:05:20.686 EAL: Detected lcore 2 as core 2 on socket 0 00:05:20.686 EAL: Detected lcore 3 as core 3 on socket 0 00:05:20.686 EAL: Detected lcore 4 as core 4 on socket 0 00:05:20.686 EAL: Detected lcore 5 as core 5 on socket 0 00:05:20.686 EAL: Detected lcore 6 as core 6 on socket 0 00:05:20.686 EAL: Detected lcore 7 as core 8 on socket 0 00:05:20.686 EAL: Detected lcore 8 as core 9 on socket 0 00:05:20.686 EAL: Detected lcore 9 as core 10 on socket 0 00:05:20.686 EAL: Detected lcore 10 as core 11 on socket 0 00:05:20.686 
EAL: Detected lcore 11 as core 12 on socket 0 00:05:20.686 EAL: Detected lcore 12 as core 13 on socket 0 00:05:20.686 EAL: Detected lcore 13 as core 14 on socket 0 00:05:20.686 EAL: Detected lcore 14 as core 16 on socket 0 00:05:20.686 EAL: Detected lcore 15 as core 17 on socket 0 00:05:20.686 EAL: Detected lcore 16 as core 18 on socket 0 00:05:20.686 EAL: Detected lcore 17 as core 19 on socket 0 00:05:20.686 EAL: Detected lcore 18 as core 20 on socket 0 00:05:20.686 EAL: Detected lcore 19 as core 21 on socket 0 00:05:20.686 EAL: Detected lcore 20 as core 22 on socket 0 00:05:20.686 EAL: Detected lcore 21 as core 24 on socket 0 00:05:20.686 EAL: Detected lcore 22 as core 25 on socket 0 00:05:20.686 EAL: Detected lcore 23 as core 26 on socket 0 00:05:20.686 EAL: Detected lcore 24 as core 27 on socket 0 00:05:20.686 EAL: Detected lcore 25 as core 28 on socket 0 00:05:20.686 EAL: Detected lcore 26 as core 29 on socket 0 00:05:20.686 EAL: Detected lcore 27 as core 30 on socket 0 00:05:20.686 EAL: Detected lcore 28 as core 0 on socket 1 00:05:20.686 EAL: Detected lcore 29 as core 1 on socket 1 00:05:20.686 EAL: Detected lcore 30 as core 2 on socket 1 00:05:20.686 EAL: Detected lcore 31 as core 3 on socket 1 00:05:20.686 EAL: Detected lcore 32 as core 4 on socket 1 00:05:20.686 EAL: Detected lcore 33 as core 5 on socket 1 00:05:20.686 EAL: Detected lcore 34 as core 6 on socket 1 00:05:20.686 EAL: Detected lcore 35 as core 8 on socket 1 00:05:20.686 EAL: Detected lcore 36 as core 9 on socket 1 00:05:20.686 EAL: Detected lcore 37 as core 10 on socket 1 00:05:20.686 EAL: Detected lcore 38 as core 11 on socket 1 00:05:20.686 EAL: Detected lcore 39 as core 12 on socket 1 00:05:20.686 EAL: Detected lcore 40 as core 13 on socket 1 00:05:20.686 EAL: Detected lcore 41 as core 14 on socket 1 00:05:20.686 EAL: Detected lcore 42 as core 16 on socket 1 00:05:20.686 EAL: Detected lcore 43 as core 17 on socket 1 00:05:20.686 EAL: Detected lcore 44 as core 18 on socket 1 00:05:20.686 EAL: Detected lcore 45 as core 19 on socket 1 00:05:20.686 EAL: Detected lcore 46 as core 20 on socket 1 00:05:20.686 EAL: Detected lcore 47 as core 21 on socket 1 00:05:20.686 EAL: Detected lcore 48 as core 22 on socket 1 00:05:20.686 EAL: Detected lcore 49 as core 24 on socket 1 00:05:20.686 EAL: Detected lcore 50 as core 25 on socket 1 00:05:20.686 EAL: Detected lcore 51 as core 26 on socket 1 00:05:20.686 EAL: Detected lcore 52 as core 27 on socket 1 00:05:20.686 EAL: Detected lcore 53 as core 28 on socket 1 00:05:20.686 EAL: Detected lcore 54 as core 29 on socket 1 00:05:20.686 EAL: Detected lcore 55 as core 30 on socket 1 00:05:20.686 EAL: Detected lcore 56 as core 0 on socket 0 00:05:20.686 EAL: Detected lcore 57 as core 1 on socket 0 00:05:20.686 EAL: Detected lcore 58 as core 2 on socket 0 00:05:20.686 EAL: Detected lcore 59 as core 3 on socket 0 00:05:20.686 EAL: Detected lcore 60 as core 4 on socket 0 00:05:20.686 EAL: Detected lcore 61 as core 5 on socket 0 00:05:20.686 EAL: Detected lcore 62 as core 6 on socket 0 00:05:20.686 EAL: Detected lcore 63 as core 8 on socket 0 00:05:20.686 EAL: Detected lcore 64 as core 9 on socket 0 00:05:20.686 EAL: Detected lcore 65 as core 10 on socket 0 00:05:20.686 EAL: Detected lcore 66 as core 11 on socket 0 00:05:20.686 EAL: Detected lcore 67 as core 12 on socket 0 00:05:20.686 EAL: Detected lcore 68 as core 13 on socket 0 00:05:20.686 EAL: Detected lcore 69 as core 14 on socket 0 00:05:20.686 EAL: Detected lcore 70 as core 16 on socket 0 00:05:20.686 EAL: Detected lcore 71 as core 
17 on socket 0 00:05:20.686 EAL: Detected lcore 72 as core 18 on socket 0 00:05:20.686 EAL: Detected lcore 73 as core 19 on socket 0 00:05:20.686 EAL: Detected lcore 74 as core 20 on socket 0 00:05:20.686 EAL: Detected lcore 75 as core 21 on socket 0 00:05:20.686 EAL: Detected lcore 76 as core 22 on socket 0 00:05:20.686 EAL: Detected lcore 77 as core 24 on socket 0 00:05:20.686 EAL: Detected lcore 78 as core 25 on socket 0 00:05:20.686 EAL: Detected lcore 79 as core 26 on socket 0 00:05:20.686 EAL: Detected lcore 80 as core 27 on socket 0 00:05:20.686 EAL: Detected lcore 81 as core 28 on socket 0 00:05:20.686 EAL: Detected lcore 82 as core 29 on socket 0 00:05:20.686 EAL: Detected lcore 83 as core 30 on socket 0 00:05:20.686 EAL: Detected lcore 84 as core 0 on socket 1 00:05:20.686 EAL: Detected lcore 85 as core 1 on socket 1 00:05:20.686 EAL: Detected lcore 86 as core 2 on socket 1 00:05:20.686 EAL: Detected lcore 87 as core 3 on socket 1 00:05:20.686 EAL: Detected lcore 88 as core 4 on socket 1 00:05:20.686 EAL: Detected lcore 89 as core 5 on socket 1 00:05:20.686 EAL: Detected lcore 90 as core 6 on socket 1 00:05:20.686 EAL: Detected lcore 91 as core 8 on socket 1 00:05:20.686 EAL: Detected lcore 92 as core 9 on socket 1 00:05:20.686 EAL: Detected lcore 93 as core 10 on socket 1 00:05:20.686 EAL: Detected lcore 94 as core 11 on socket 1 00:05:20.686 EAL: Detected lcore 95 as core 12 on socket 1 00:05:20.686 EAL: Detected lcore 96 as core 13 on socket 1 00:05:20.686 EAL: Detected lcore 97 as core 14 on socket 1 00:05:20.686 EAL: Detected lcore 98 as core 16 on socket 1 00:05:20.686 EAL: Detected lcore 99 as core 17 on socket 1 00:05:20.686 EAL: Detected lcore 100 as core 18 on socket 1 00:05:20.686 EAL: Detected lcore 101 as core 19 on socket 1 00:05:20.686 EAL: Detected lcore 102 as core 20 on socket 1 00:05:20.686 EAL: Detected lcore 103 as core 21 on socket 1 00:05:20.686 EAL: Detected lcore 104 as core 22 on socket 1 00:05:20.686 EAL: Detected lcore 105 as core 24 on socket 1 00:05:20.686 EAL: Detected lcore 106 as core 25 on socket 1 00:05:20.686 EAL: Detected lcore 107 as core 26 on socket 1 00:05:20.686 EAL: Detected lcore 108 as core 27 on socket 1 00:05:20.686 EAL: Detected lcore 109 as core 28 on socket 1 00:05:20.686 EAL: Detected lcore 110 as core 29 on socket 1 00:05:20.686 EAL: Detected lcore 111 as core 30 on socket 1 00:05:20.686 EAL: Maximum logical cores by configuration: 128 00:05:20.686 EAL: Detected CPU lcores: 112 00:05:20.686 EAL: Detected NUMA nodes: 2 00:05:20.686 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:20.686 EAL: Detected shared linkage of DPDK 00:05:20.686 EAL: No shared files mode enabled, IPC will be disabled 00:05:20.686 EAL: Bus pci wants IOVA as 'DC' 00:05:20.686 EAL: Buses did not request a specific IOVA mode. 00:05:20.686 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:20.686 EAL: Selected IOVA mode 'VA' 00:05:20.686 EAL: Probing VFIO support... 00:05:20.686 EAL: IOMMU type 1 (Type 1) is supported 00:05:20.686 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:20.686 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:20.686 EAL: VFIO support initialized 00:05:20.686 EAL: Ask a virtual area of 0x2e000 bytes 00:05:20.686 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:20.686 EAL: Setting up physically contiguous memory... 
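EAL's lcore table above (112 lcores across 2 NUMA nodes, with lcores 56-111 repeating the core/socket pairs of 0-55, i.e. hyper-thread siblings) comes from sysfs CPU topology. A rough by-hand equivalent of the same mapping:

  # Print each CPU's physical core and package, mirroring the detection log.
  for c in /sys/devices/system/cpu/cpu[0-9]*; do
    echo "$(basename "$c"): core $(cat "$c/topology/core_id") socket $(cat "$c/topology/physical_package_id")"
  done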
00:05:20.686 EAL: Setting maximum number of open files to 524288 00:05:20.686 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:20.686 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:20.686 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:20.686 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.686 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:20.686 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.686 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.686 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:20.686 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:20.686 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.686 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:20.686 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.686 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.686 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:20.687 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:20.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.687 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:20.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.687 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:20.687 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:20.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.687 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:20.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.687 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:20.687 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:20.687 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:20.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.687 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:20.687 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.687 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:20.687 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:20.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.687 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:20.687 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.687 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:20.687 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:20.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.687 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:20.687 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.687 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:20.687 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:20.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.687 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:20.687 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.687 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:20.687 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:20.687 EAL: Hugepages will be freed exactly as allocated. 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: TSC frequency is ~2700000 KHz 00:05:20.687 EAL: Main lcore 0 is ready (tid=7f2af86fda00;cpuset=[0]) 00:05:20.687 EAL: Trying to obtain current memory policy. 00:05:20.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.687 EAL: Restoring previous memory policy: 0 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was expanded by 2MB 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:20.687 EAL: Mem event callback 'spdk:(nil)' registered 00:05:20.687 00:05:20.687 00:05:20.687 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.687 http://cunit.sourceforge.net/ 00:05:20.687 00:05:20.687 00:05:20.687 Suite: components_suite 00:05:20.687 Test: vtophys_malloc_test ...passed 00:05:20.687 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:20.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.687 EAL: Restoring previous memory policy: 4 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was expanded by 4MB 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was shrunk by 4MB 00:05:20.687 EAL: Trying to obtain current memory policy. 00:05:20.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.687 EAL: Restoring previous memory policy: 4 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was expanded by 6MB 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was shrunk by 6MB 00:05:20.687 EAL: Trying to obtain current memory policy. 00:05:20.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.687 EAL: Restoring previous memory policy: 4 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was expanded by 10MB 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was shrunk by 10MB 00:05:20.687 EAL: Trying to obtain current memory policy. 
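Every memseg list above is carved from the per-node 2048kB hugepage pools ('Detected memory type ... hugepage_sz:2097152'), and 'Hugepages will be freed exactly as allocated' means those reservations come and go with the heap calls that follow. The pools themselves can be inspected directly:

  # Per-NUMA-node 2 MiB hugepage counts, plus the kernel-wide summary.
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep -i huge /proc/meminfo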
00:05:20.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.687 EAL: Restoring previous memory policy: 4 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was expanded by 18MB 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was shrunk by 18MB 00:05:20.687 EAL: Trying to obtain current memory policy. 00:05:20.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.687 EAL: Restoring previous memory policy: 4 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.687 EAL: Trying to obtain current memory policy. 00:05:20.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.687 EAL: Restoring previous memory policy: 4 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.687 EAL: request: mp_malloc_sync 00:05:20.687 EAL: No shared files mode enabled, IPC is disabled 00:05:20.687 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.687 EAL: Trying to obtain current memory policy. 00:05:20.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.946 EAL: Restoring previous memory policy: 4 00:05:20.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.946 EAL: request: mp_malloc_sync 00:05:20.946 EAL: No shared files mode enabled, IPC is disabled 00:05:20.946 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.946 EAL: request: mp_malloc_sync 00:05:20.946 EAL: No shared files mode enabled, IPC is disabled 00:05:20.946 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.946 EAL: Trying to obtain current memory policy. 00:05:20.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.946 EAL: Restoring previous memory policy: 4 00:05:20.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.946 EAL: request: mp_malloc_sync 00:05:20.946 EAL: No shared files mode enabled, IPC is disabled 00:05:20.946 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.946 EAL: request: mp_malloc_sync 00:05:20.946 EAL: No shared files mode enabled, IPC is disabled 00:05:20.946 EAL: Heap on socket 0 was shrunk by 258MB 00:05:20.946 EAL: Trying to obtain current memory policy. 
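The expansion sizes in this malloc test (4, 6, 10, 18, 34, 66, 130, 258 MB so far, finishing at 514 and 1026 MB below) follow a 2^n + 2 MB ladder; a one-liner reproduces the sequence the log reports:

  # Reproduce the heap-expansion sizes logged by vtophys_spdk_malloc_test.
  for n in $(seq 1 10); do echo "$(( (1 << n) + 2 ))MB"; done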
00:05:20.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.204 EAL: Restoring previous memory policy: 4 00:05:21.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.204 EAL: request: mp_malloc_sync 00:05:21.204 EAL: No shared files mode enabled, IPC is disabled 00:05:21.204 EAL: Heap on socket 0 was expanded by 514MB 00:05:21.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.204 EAL: request: mp_malloc_sync 00:05:21.204 EAL: No shared files mode enabled, IPC is disabled 00:05:21.204 EAL: Heap on socket 0 was shrunk by 514MB 00:05:21.204 EAL: Trying to obtain current memory policy. 00:05:21.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.463 EAL: Restoring previous memory policy: 4 00:05:21.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.463 EAL: request: mp_malloc_sync 00:05:21.463 EAL: No shared files mode enabled, IPC is disabled 00:05:21.463 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.723 EAL: request: mp_malloc_sync 00:05:21.723 EAL: No shared files mode enabled, IPC is disabled 00:05:21.723 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:21.723 passed 00:05:21.723 00:05:21.723 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.723 suites 1 1 n/a 0 0 00:05:21.723 tests 2 2 2 0 0 00:05:21.723 asserts 497 497 497 0 n/a 00:05:21.723 00:05:21.723 Elapsed time = 0.953 seconds 00:05:21.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.723 EAL: request: mp_malloc_sync 00:05:21.723 EAL: No shared files mode enabled, IPC is disabled 00:05:21.723 EAL: Heap on socket 0 was shrunk by 2MB 00:05:21.723 EAL: No shared files mode enabled, IPC is disabled 00:05:21.723 EAL: No shared files mode enabled, IPC is disabled 00:05:21.723 EAL: No shared files mode enabled, IPC is disabled 00:05:21.723 00:05:21.723 real 0m1.075s 00:05:21.723 user 0m0.635s 00:05:21.723 sys 0m0.408s 00:05:21.723 03:53:16 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.723 03:53:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:21.723 ************************************ 00:05:21.723 END TEST env_vtophys 00:05:21.723 ************************************ 00:05:21.723 03:53:16 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.723 03:53:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.723 03:53:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.723 03:53:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.723 ************************************ 00:05:21.723 START TEST env_pci 00:05:21.723 ************************************ 00:05:21.723 03:53:16 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.723 00:05:21.723 00:05:21.723 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.723 http://cunit.sourceforge.net/ 00:05:21.723 00:05:21.723 00:05:21.723 Suite: pci 00:05:21.723 Test: pci_hook ...[2024-12-10 03:53:16.095501] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 584380 has claimed it 00:05:21.982 EAL: Cannot find device (10000:00:01.0) 00:05:21.982 EAL: Failed to attach device on primary process 00:05:21.982 passed 00:05:21.982 00:05:21.982 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.982 suites 1 1 
n/a 0 0 00:05:21.982 tests 1 1 1 0 0 00:05:21.982 asserts 25 25 25 0 n/a 00:05:21.982 00:05:21.982 Elapsed time = 0.028 seconds 00:05:21.982 00:05:21.982 real 0m0.048s 00:05:21.982 user 0m0.020s 00:05:21.982 sys 0m0.027s 00:05:21.982 03:53:16 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.982 03:53:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:21.982 ************************************ 00:05:21.982 END TEST env_pci 00:05:21.982 ************************************ 00:05:21.982 03:53:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:21.982 03:53:16 env -- env/env.sh@15 -- # uname 00:05:21.982 03:53:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:21.982 03:53:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:21.982 03:53:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.982 03:53:16 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:21.982 03:53:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.982 03:53:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.982 ************************************ 00:05:21.982 START TEST env_dpdk_post_init 00:05:21.982 ************************************ 00:05:21.982 03:53:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.982 EAL: Detected CPU lcores: 112 00:05:21.982 EAL: Detected NUMA nodes: 2 00:05:21.982 EAL: Detected shared linkage of DPDK 00:05:21.982 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.982 EAL: Selected IOVA mode 'VA' 00:05:21.982 EAL: VFIO support initialized 00:05:21.982 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.982 EAL: Using IOMMU type 1 (Type 1) 00:05:21.983 EAL: Ignore mapping IO port bar(1) 00:05:21.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:21.983 EAL: Ignore mapping IO port bar(1) 00:05:21.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:21.983 EAL: Ignore mapping IO port bar(1) 00:05:21.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:21.983 EAL: Ignore mapping IO port bar(1) 00:05:21.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:21.983 EAL: Ignore mapping IO port bar(1) 00:05:21.983 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 
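The pci_hook case above passes precisely because the attach fails: spdk_pci_device_claim refuses bdf 10000:00:01.0 since another process (584380) already holds its lock file /var/tmp/spdk_pci_lock_<bdf>, and the probe error follows. As a rough stand-in for that collision using flock(1) on the same path convention (this is not SPDK's locking code, just an illustration):

  # Two claimers on one lock file: the second non-blocking one loses.
  lock=/var/tmp/spdk_pci_lock_10000:00:01.0
  flock -n "$lock" -c 'sleep 5' &
  sleep 1
  flock -n "$lock" -c 'echo claimed' || echo "already claimed, as in the test"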
00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:22.242 EAL: Ignore mapping IO port bar(1) 00:05:22.242 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:23.179 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:28.449 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:28.449 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:28.708 Starting DPDK initialization... 00:05:28.708 Starting SPDK post initialization... 00:05:28.708 SPDK NVMe probe 00:05:28.708 Attaching to 0000:d8:00.0 00:05:28.708 Attached to 0000:d8:00.0 00:05:28.708 Cleaning up... 00:05:28.708 00:05:28.708 real 0m6.669s 00:05:28.708 user 0m5.104s 00:05:28.708 sys 0m0.626s 00:05:28.708 03:53:22 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.708 03:53:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.708 ************************************ 00:05:28.708 END TEST env_dpdk_post_init 00:05:28.708 ************************************ 00:05:28.708 03:53:22 env -- env/env.sh@26 -- # uname 00:05:28.708 03:53:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.708 03:53:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.708 03:53:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.708 03:53:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.708 03:53:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.708 ************************************ 00:05:28.708 START TEST env_mem_callbacks 00:05:28.708 ************************************ 00:05:28.708 03:53:22 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.708 EAL: Detected CPU lcores: 112 00:05:28.708 EAL: Detected NUMA nodes: 2 00:05:28.708 EAL: Detected shared linkage of DPDK 00:05:28.708 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.708 EAL: Selected IOVA mode 'VA' 00:05:28.708 EAL: VFIO support initialized 00:05:28.708 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.708 00:05:28.708 00:05:28.708 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.708 http://cunit.sourceforge.net/ 00:05:28.708 00:05:28.708 00:05:28.708 Suite: memory 00:05:28.708 Test: test ... 
00:05:28.708 register 0x200000200000 2097152 00:05:28.708 malloc 3145728 00:05:28.708 register 0x200000400000 4194304 00:05:28.708 buf 0x200000500000 len 3145728 PASSED 00:05:28.708 malloc 64 00:05:28.708 buf 0x2000004fff40 len 64 PASSED 00:05:28.708 malloc 4194304 00:05:28.708 register 0x200000800000 6291456 00:05:28.708 buf 0x200000a00000 len 4194304 PASSED 00:05:28.708 free 0x200000500000 3145728 00:05:28.708 free 0x2000004fff40 64 00:05:28.708 unregister 0x200000400000 4194304 PASSED 00:05:28.708 free 0x200000a00000 4194304 00:05:28.708 unregister 0x200000800000 6291456 PASSED 00:05:28.708 malloc 8388608 00:05:28.708 register 0x200000400000 10485760 00:05:28.708 buf 0x200000600000 len 8388608 PASSED 00:05:28.708 free 0x200000600000 8388608 00:05:28.708 unregister 0x200000400000 10485760 PASSED 00:05:28.708 passed 00:05:28.708 00:05:28.708 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.708 suites 1 1 n/a 0 0 00:05:28.708 tests 1 1 1 0 0 00:05:28.708 asserts 15 15 15 0 n/a 00:05:28.708 00:05:28.708 Elapsed time = 0.004 seconds 00:05:28.708 00:05:28.708 real 0m0.052s 00:05:28.708 user 0m0.017s 00:05:28.708 sys 0m0.035s 00:05:28.708 03:53:22 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.708 03:53:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:28.708 ************************************ 00:05:28.708 END TEST env_mem_callbacks 00:05:28.708 ************************************ 00:05:28.708 00:05:28.708 real 0m8.476s 00:05:28.708 user 0m6.114s 00:05:28.708 sys 0m1.427s 00:05:28.708 03:53:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.708 03:53:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.708 ************************************ 00:05:28.708 END TEST env 00:05:28.708 ************************************ 00:05:28.708 03:53:23 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.708 03:53:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.708 03:53:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.708 03:53:23 -- common/autotest_common.sh@10 -- # set +x 00:05:28.708 ************************************ 00:05:28.708 START TEST rpc 00:05:28.708 ************************************ 00:05:28.708 03:53:23 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.968 * Looking for test storage... 
00:05:28.968 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.968 03:53:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.968 03:53:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.968 03:53:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.968 03:53:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.968 03:53:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.968 03:53:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.968 03:53:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.968 03:53:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.968 03:53:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.968 03:53:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.968 03:53:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.968 03:53:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:28.968 03:53:23 rpc -- scripts/common.sh@345 -- # : 1 00:05:28.968 03:53:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.968 03:53:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.968 03:53:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:28.968 03:53:23 rpc -- scripts/common.sh@353 -- # local d=1 00:05:28.968 03:53:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.968 03:53:23 rpc -- scripts/common.sh@355 -- # echo 1 00:05:28.968 03:53:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.968 03:53:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:28.968 03:53:23 rpc -- scripts/common.sh@353 -- # local d=2 00:05:28.968 03:53:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.968 03:53:23 rpc -- scripts/common.sh@355 -- # echo 2 00:05:28.968 03:53:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.968 03:53:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.968 03:53:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.968 03:53:23 rpc -- scripts/common.sh@368 -- # return 0 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.968 --rc genhtml_branch_coverage=1 00:05:28.968 --rc genhtml_function_coverage=1 00:05:28.968 --rc genhtml_legend=1 00:05:28.968 --rc geninfo_all_blocks=1 00:05:28.968 --rc geninfo_unexecuted_blocks=1 00:05:28.968 00:05:28.968 ' 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.968 --rc genhtml_branch_coverage=1 00:05:28.968 --rc genhtml_function_coverage=1 00:05:28.968 --rc genhtml_legend=1 00:05:28.968 --rc geninfo_all_blocks=1 00:05:28.968 --rc geninfo_unexecuted_blocks=1 00:05:28.968 00:05:28.968 ' 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.968 --rc genhtml_branch_coverage=1 00:05:28.968 --rc genhtml_function_coverage=1 00:05:28.968 
--rc genhtml_legend=1 00:05:28.968 --rc geninfo_all_blocks=1 00:05:28.968 --rc geninfo_unexecuted_blocks=1 00:05:28.968 00:05:28.968 ' 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.968 --rc genhtml_branch_coverage=1 00:05:28.968 --rc genhtml_function_coverage=1 00:05:28.968 --rc genhtml_legend=1 00:05:28.968 --rc geninfo_all_blocks=1 00:05:28.968 --rc geninfo_unexecuted_blocks=1 00:05:28.968 00:05:28.968 ' 00:05:28.968 03:53:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=585831 00:05:28.968 03:53:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.968 03:53:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:28.968 03:53:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 585831 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@835 -- # '[' -z 585831 ']' 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.968 03:53:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.968 [2024-12-10 03:53:23.255926] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:28.968 [2024-12-10 03:53:23.255969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid585831 ] 00:05:28.968 [2024-12-10 03:53:23.313074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.968 [2024-12-10 03:53:23.349221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:28.968 [2024-12-10 03:53:23.349256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 585831' to capture a snapshot of events at runtime. 00:05:28.968 [2024-12-10 03:53:23.349263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.968 [2024-12-10 03:53:23.349274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.968 [2024-12-10 03:53:23.349280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid585831 for offline analysis/debug. 
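The rpc_integrity test that follows exercises the target purely over JSON-RPC: create a malloc bdev, wrap it in a passthru bdev, confirm both appear in bdev_get_bdevs, then delete them in reverse order and confirm the list is empty again. A minimal by-hand sketch of that same flow, assuming a built SPDK tree, a spdk_tgt listening on the default /var/tmp/spdk.sock, and illustrative bdev names (the harness itself goes through rpc_cmd and waitforlisten rather than raw rpc.py):

./build/bin/spdk_tgt -m 0x1 &                        # start the target in the background
sleep 2                                              # crude wait for /var/tmp/spdk.sock; the harness uses waitforlisten instead
MALLOC=$(./scripts/rpc.py bdev_malloc_create 8 512)  # 8 MiB malloc bdev with 512-byte blocks; prints the new bdev name
./scripts/rpc.py bdev_passthru_create -b "$MALLOC" -p Passthru0   # claim the malloc bdev behind Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length          # expect 2: the base bdev plus the passthru
./scripts/rpc.py bdev_passthru_delete Passthru0      # tear down in reverse order of creation
./scripts/rpc.py bdev_malloc_delete "$MALLOC"
./scripts/rpc.py bdev_get_bdevs | jq length          # expect 0

The jq length checks in the trace below are the same assertions: one array entry per registered bdev, and an empty array once both deletes have completed.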
00:05:28.968 [2024-12-10 03:53:23.349776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.227 03:53:23 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.227 03:53:23 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.227 03:53:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:29.227 03:53:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:29.227 03:53:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:29.227 03:53:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:29.228 03:53:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.228 03:53:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.228 03:53:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.228 ************************************ 00:05:29.228 START TEST rpc_integrity 00:05:29.228 ************************************ 00:05:29.228 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:29.228 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.228 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.228 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.228 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.228 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:29.228 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.487 { 00:05:29.487 "name": "Malloc0", 00:05:29.487 "aliases": [ 00:05:29.487 "3d739caa-39ad-4ca9-b5ba-f79af40933b5" 00:05:29.487 ], 00:05:29.487 "product_name": "Malloc disk", 00:05:29.487 "block_size": 512, 00:05:29.487 "num_blocks": 16384, 00:05:29.487 "uuid": "3d739caa-39ad-4ca9-b5ba-f79af40933b5", 00:05:29.487 "assigned_rate_limits": { 00:05:29.487 "rw_ios_per_sec": 0, 00:05:29.487 "rw_mbytes_per_sec": 0, 00:05:29.487 "r_mbytes_per_sec": 0, 00:05:29.487 "w_mbytes_per_sec": 0 00:05:29.487 }, 00:05:29.487 "claimed": false, 
00:05:29.487 "zoned": false, 00:05:29.487 "supported_io_types": { 00:05:29.487 "read": true, 00:05:29.487 "write": true, 00:05:29.487 "unmap": true, 00:05:29.487 "flush": true, 00:05:29.487 "reset": true, 00:05:29.487 "nvme_admin": false, 00:05:29.487 "nvme_io": false, 00:05:29.487 "nvme_io_md": false, 00:05:29.487 "write_zeroes": true, 00:05:29.487 "zcopy": true, 00:05:29.487 "get_zone_info": false, 00:05:29.487 "zone_management": false, 00:05:29.487 "zone_append": false, 00:05:29.487 "compare": false, 00:05:29.487 "compare_and_write": false, 00:05:29.487 "abort": true, 00:05:29.487 "seek_hole": false, 00:05:29.487 "seek_data": false, 00:05:29.487 "copy": true, 00:05:29.487 "nvme_iov_md": false 00:05:29.487 }, 00:05:29.487 "memory_domains": [ 00:05:29.487 { 00:05:29.487 "dma_device_id": "system", 00:05:29.487 "dma_device_type": 1 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.487 "dma_device_type": 2 00:05:29.487 } 00:05:29.487 ], 00:05:29.487 "driver_specific": {} 00:05:29.487 } 00:05:29.487 ]' 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.487 [2024-12-10 03:53:23.703658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:29.487 [2024-12-10 03:53:23.703685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.487 [2024-12-10 03:53:23.703696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1023e60 00:05:29.487 [2024-12-10 03:53:23.703702] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.487 [2024-12-10 03:53:23.704738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.487 [2024-12-10 03:53:23.704759] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.487 Passthru0 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.487 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.487 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.487 { 00:05:29.487 "name": "Malloc0", 00:05:29.487 "aliases": [ 00:05:29.487 "3d739caa-39ad-4ca9-b5ba-f79af40933b5" 00:05:29.487 ], 00:05:29.487 "product_name": "Malloc disk", 00:05:29.487 "block_size": 512, 00:05:29.487 "num_blocks": 16384, 00:05:29.487 "uuid": "3d739caa-39ad-4ca9-b5ba-f79af40933b5", 00:05:29.487 "assigned_rate_limits": { 00:05:29.487 "rw_ios_per_sec": 0, 00:05:29.487 "rw_mbytes_per_sec": 0, 00:05:29.487 "r_mbytes_per_sec": 0, 00:05:29.487 "w_mbytes_per_sec": 0 00:05:29.487 }, 00:05:29.487 "claimed": true, 00:05:29.487 "claim_type": "exclusive_write", 00:05:29.487 "zoned": false, 00:05:29.487 "supported_io_types": { 00:05:29.487 "read": true, 00:05:29.487 "write": true, 00:05:29.487 "unmap": true, 00:05:29.487 "flush": true, 00:05:29.487 "reset": true, 
00:05:29.487 "nvme_admin": false, 00:05:29.487 "nvme_io": false, 00:05:29.487 "nvme_io_md": false, 00:05:29.487 "write_zeroes": true, 00:05:29.487 "zcopy": true, 00:05:29.487 "get_zone_info": false, 00:05:29.487 "zone_management": false, 00:05:29.487 "zone_append": false, 00:05:29.487 "compare": false, 00:05:29.487 "compare_and_write": false, 00:05:29.487 "abort": true, 00:05:29.487 "seek_hole": false, 00:05:29.487 "seek_data": false, 00:05:29.487 "copy": true, 00:05:29.487 "nvme_iov_md": false 00:05:29.487 }, 00:05:29.487 "memory_domains": [ 00:05:29.487 { 00:05:29.487 "dma_device_id": "system", 00:05:29.487 "dma_device_type": 1 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.487 "dma_device_type": 2 00:05:29.487 } 00:05:29.487 ], 00:05:29.487 "driver_specific": {} 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "name": "Passthru0", 00:05:29.487 "aliases": [ 00:05:29.487 "a41521cc-f7a7-5a45-85af-12e997cdeb86" 00:05:29.487 ], 00:05:29.487 "product_name": "passthru", 00:05:29.487 "block_size": 512, 00:05:29.487 "num_blocks": 16384, 00:05:29.487 "uuid": "a41521cc-f7a7-5a45-85af-12e997cdeb86", 00:05:29.487 "assigned_rate_limits": { 00:05:29.487 "rw_ios_per_sec": 0, 00:05:29.487 "rw_mbytes_per_sec": 0, 00:05:29.487 "r_mbytes_per_sec": 0, 00:05:29.487 "w_mbytes_per_sec": 0 00:05:29.487 }, 00:05:29.487 "claimed": false, 00:05:29.487 "zoned": false, 00:05:29.487 "supported_io_types": { 00:05:29.487 "read": true, 00:05:29.487 "write": true, 00:05:29.487 "unmap": true, 00:05:29.487 "flush": true, 00:05:29.487 "reset": true, 00:05:29.487 "nvme_admin": false, 00:05:29.487 "nvme_io": false, 00:05:29.487 "nvme_io_md": false, 00:05:29.487 "write_zeroes": true, 00:05:29.487 "zcopy": true, 00:05:29.488 "get_zone_info": false, 00:05:29.488 "zone_management": false, 00:05:29.488 "zone_append": false, 00:05:29.488 "compare": false, 00:05:29.488 "compare_and_write": false, 00:05:29.488 "abort": true, 00:05:29.488 "seek_hole": false, 00:05:29.488 "seek_data": false, 00:05:29.488 "copy": true, 00:05:29.488 "nvme_iov_md": false 00:05:29.488 }, 00:05:29.488 "memory_domains": [ 00:05:29.488 { 00:05:29.488 "dma_device_id": "system", 00:05:29.488 "dma_device_type": 1 00:05:29.488 }, 00:05:29.488 { 00:05:29.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.488 "dma_device_type": 2 00:05:29.488 } 00:05:29.488 ], 00:05:29.488 "driver_specific": { 00:05:29.488 "passthru": { 00:05:29.488 "name": "Passthru0", 00:05:29.488 "base_bdev_name": "Malloc0" 00:05:29.488 } 00:05:29.488 } 00:05:29.488 } 00:05:29.488 ]' 00:05:29.488 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.488 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.488 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.488 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.488 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.488 
03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.488 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:29.488 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.488 03:53:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.488 00:05:29.488 real 0m0.249s 00:05:29.488 user 0m0.162s 00:05:29.488 sys 0m0.032s 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.488 03:53:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.488 ************************************ 00:05:29.488 END TEST rpc_integrity 00:05:29.488 ************************************ 00:05:29.488 03:53:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:29.488 03:53:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.488 03:53:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.488 03:53:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.747 ************************************ 00:05:29.747 START TEST rpc_plugins 00:05:29.747 ************************************ 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:29.747 { 00:05:29.747 "name": "Malloc1", 00:05:29.747 "aliases": [ 00:05:29.747 "c40c6207-be73-4ffc-bbc4-1888a895c29b" 00:05:29.747 ], 00:05:29.747 "product_name": "Malloc disk", 00:05:29.747 "block_size": 4096, 00:05:29.747 "num_blocks": 256, 00:05:29.747 "uuid": "c40c6207-be73-4ffc-bbc4-1888a895c29b", 00:05:29.747 "assigned_rate_limits": { 00:05:29.747 "rw_ios_per_sec": 0, 00:05:29.747 "rw_mbytes_per_sec": 0, 00:05:29.747 "r_mbytes_per_sec": 0, 00:05:29.747 "w_mbytes_per_sec": 0 00:05:29.747 }, 00:05:29.747 "claimed": false, 00:05:29.747 "zoned": false, 00:05:29.747 "supported_io_types": { 00:05:29.747 "read": true, 00:05:29.747 "write": true, 00:05:29.747 "unmap": true, 00:05:29.747 "flush": true, 00:05:29.747 "reset": true, 00:05:29.747 "nvme_admin": false, 00:05:29.747 "nvme_io": false, 00:05:29.747 "nvme_io_md": false, 00:05:29.747 "write_zeroes": true, 00:05:29.747 "zcopy": true, 00:05:29.747 "get_zone_info": false, 00:05:29.747 "zone_management": false, 00:05:29.747 "zone_append": false, 00:05:29.747 "compare": false, 00:05:29.747 "compare_and_write": false, 00:05:29.747 "abort": true, 00:05:29.747 "seek_hole": false, 00:05:29.747 "seek_data": false, 00:05:29.747 "copy": true, 00:05:29.747 "nvme_iov_md": false 00:05:29.747 }, 00:05:29.747 
"memory_domains": [ 00:05:29.747 { 00:05:29.747 "dma_device_id": "system", 00:05:29.747 "dma_device_type": 1 00:05:29.747 }, 00:05:29.747 { 00:05:29.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.747 "dma_device_type": 2 00:05:29.747 } 00:05:29.747 ], 00:05:29.747 "driver_specific": {} 00:05:29.747 } 00:05:29.747 ]' 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.747 03:53:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:29.747 03:53:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:29.747 03:53:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:29.747 00:05:29.747 real 0m0.137s 00:05:29.747 user 0m0.085s 00:05:29.747 sys 0m0.020s 00:05:29.747 03:53:24 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.747 03:53:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.747 ************************************ 00:05:29.747 END TEST rpc_plugins 00:05:29.747 ************************************ 00:05:29.747 03:53:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:29.747 03:53:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.747 03:53:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.747 03:53:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.747 ************************************ 00:05:29.747 START TEST rpc_trace_cmd_test 00:05:29.747 ************************************ 00:05:29.747 03:53:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:29.747 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:29.747 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:29.747 03:53:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.747 03:53:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.748 03:53:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.748 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:29.748 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid585831", 00:05:29.748 "tpoint_group_mask": "0x8", 00:05:29.748 "iscsi_conn": { 00:05:29.748 "mask": "0x2", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "scsi": { 00:05:29.748 "mask": "0x4", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "bdev": { 00:05:29.748 "mask": "0x8", 00:05:29.748 "tpoint_mask": "0xffffffffffffffff" 00:05:29.748 }, 00:05:29.748 "nvmf_rdma": { 00:05:29.748 "mask": "0x10", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "nvmf_tcp": { 00:05:29.748 "mask": "0x20", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 
00:05:29.748 "ftl": { 00:05:29.748 "mask": "0x40", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "blobfs": { 00:05:29.748 "mask": "0x80", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "dsa": { 00:05:29.748 "mask": "0x200", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "thread": { 00:05:29.748 "mask": "0x400", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "nvme_pcie": { 00:05:29.748 "mask": "0x800", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "iaa": { 00:05:29.748 "mask": "0x1000", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "nvme_tcp": { 00:05:29.748 "mask": "0x2000", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "bdev_nvme": { 00:05:29.748 "mask": "0x4000", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "sock": { 00:05:29.748 "mask": "0x8000", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "blob": { 00:05:29.748 "mask": "0x10000", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "bdev_raid": { 00:05:29.748 "mask": "0x20000", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 }, 00:05:29.748 "scheduler": { 00:05:29.748 "mask": "0x40000", 00:05:29.748 "tpoint_mask": "0x0" 00:05:29.748 } 00:05:29.748 }' 00:05:29.748 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.008 00:05:30.008 real 0m0.221s 00:05:30.008 user 0m0.175s 00:05:30.008 sys 0m0.038s 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.008 03:53:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.008 ************************************ 00:05:30.008 END TEST rpc_trace_cmd_test 00:05:30.008 ************************************ 00:05:30.008 03:53:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.008 03:53:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.008 03:53:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.008 03:53:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.008 03:53:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.008 03:53:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.008 ************************************ 00:05:30.008 START TEST rpc_daemon_integrity 00:05:30.008 ************************************ 00:05:30.008 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:30.008 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.008 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.008 03:53:24 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.008 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.008 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.008 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.268 { 00:05:30.268 "name": "Malloc2", 00:05:30.268 "aliases": [ 00:05:30.268 "e2c12097-74d1-4458-9fb2-9d26648c875d" 00:05:30.268 ], 00:05:30.268 "product_name": "Malloc disk", 00:05:30.268 "block_size": 512, 00:05:30.268 "num_blocks": 16384, 00:05:30.268 "uuid": "e2c12097-74d1-4458-9fb2-9d26648c875d", 00:05:30.268 "assigned_rate_limits": { 00:05:30.268 "rw_ios_per_sec": 0, 00:05:30.268 "rw_mbytes_per_sec": 0, 00:05:30.268 "r_mbytes_per_sec": 0, 00:05:30.268 "w_mbytes_per_sec": 0 00:05:30.268 }, 00:05:30.268 "claimed": false, 00:05:30.268 "zoned": false, 00:05:30.268 "supported_io_types": { 00:05:30.268 "read": true, 00:05:30.268 "write": true, 00:05:30.268 "unmap": true, 00:05:30.268 "flush": true, 00:05:30.268 "reset": true, 00:05:30.268 "nvme_admin": false, 00:05:30.268 "nvme_io": false, 00:05:30.268 "nvme_io_md": false, 00:05:30.268 "write_zeroes": true, 00:05:30.268 "zcopy": true, 00:05:30.268 "get_zone_info": false, 00:05:30.268 "zone_management": false, 00:05:30.268 "zone_append": false, 00:05:30.268 "compare": false, 00:05:30.268 "compare_and_write": false, 00:05:30.268 "abort": true, 00:05:30.268 "seek_hole": false, 00:05:30.268 "seek_data": false, 00:05:30.268 "copy": true, 00:05:30.268 "nvme_iov_md": false 00:05:30.268 }, 00:05:30.268 "memory_domains": [ 00:05:30.268 { 00:05:30.268 "dma_device_id": "system", 00:05:30.268 "dma_device_type": 1 00:05:30.268 }, 00:05:30.268 { 00:05:30.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.268 "dma_device_type": 2 00:05:30.268 } 00:05:30.268 ], 00:05:30.268 "driver_specific": {} 00:05:30.268 } 00:05:30.268 ]' 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.268 [2024-12-10 03:53:24.497734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:30.268 [2024-12-10 03:53:24.497761] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.268 [2024-12-10 03:53:24.497774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10242b0 00:05:30.268 [2024-12-10 03:53:24.497779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.268 [2024-12-10 03:53:24.498708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.268 [2024-12-10 03:53:24.498727] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.268 Passthru0 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.268 { 00:05:30.268 "name": "Malloc2", 00:05:30.268 "aliases": [ 00:05:30.268 "e2c12097-74d1-4458-9fb2-9d26648c875d" 00:05:30.268 ], 00:05:30.268 "product_name": "Malloc disk", 00:05:30.268 "block_size": 512, 00:05:30.268 "num_blocks": 16384, 00:05:30.268 "uuid": "e2c12097-74d1-4458-9fb2-9d26648c875d", 00:05:30.268 "assigned_rate_limits": { 00:05:30.268 "rw_ios_per_sec": 0, 00:05:30.268 "rw_mbytes_per_sec": 0, 00:05:30.268 "r_mbytes_per_sec": 0, 00:05:30.268 "w_mbytes_per_sec": 0 00:05:30.268 }, 00:05:30.268 "claimed": true, 00:05:30.268 "claim_type": "exclusive_write", 00:05:30.268 "zoned": false, 00:05:30.268 "supported_io_types": { 00:05:30.268 "read": true, 00:05:30.268 "write": true, 00:05:30.268 "unmap": true, 00:05:30.268 "flush": true, 00:05:30.268 "reset": true, 00:05:30.268 "nvme_admin": false, 00:05:30.268 "nvme_io": false, 00:05:30.268 "nvme_io_md": false, 00:05:30.268 "write_zeroes": true, 00:05:30.268 "zcopy": true, 00:05:30.268 "get_zone_info": false, 00:05:30.268 "zone_management": false, 00:05:30.268 "zone_append": false, 00:05:30.268 "compare": false, 00:05:30.268 "compare_and_write": false, 00:05:30.268 "abort": true, 00:05:30.268 "seek_hole": false, 00:05:30.268 "seek_data": false, 00:05:30.268 "copy": true, 00:05:30.268 "nvme_iov_md": false 00:05:30.268 }, 00:05:30.268 "memory_domains": [ 00:05:30.268 { 00:05:30.268 "dma_device_id": "system", 00:05:30.268 "dma_device_type": 1 00:05:30.268 }, 00:05:30.268 { 00:05:30.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.268 "dma_device_type": 2 00:05:30.268 } 00:05:30.268 ], 00:05:30.268 "driver_specific": {} 00:05:30.268 }, 00:05:30.268 { 00:05:30.268 "name": "Passthru0", 00:05:30.268 "aliases": [ 00:05:30.268 "5887d850-20c8-54b7-8e88-d636583ffe31" 00:05:30.268 ], 00:05:30.268 "product_name": "passthru", 00:05:30.268 "block_size": 512, 00:05:30.268 "num_blocks": 16384, 00:05:30.268 "uuid": "5887d850-20c8-54b7-8e88-d636583ffe31", 00:05:30.268 "assigned_rate_limits": { 00:05:30.268 "rw_ios_per_sec": 0, 00:05:30.268 "rw_mbytes_per_sec": 0, 00:05:30.268 "r_mbytes_per_sec": 0, 00:05:30.268 "w_mbytes_per_sec": 0 00:05:30.268 }, 00:05:30.268 "claimed": false, 00:05:30.268 "zoned": false, 00:05:30.268 "supported_io_types": { 00:05:30.268 "read": true, 00:05:30.268 "write": true, 00:05:30.268 "unmap": true, 00:05:30.268 "flush": true, 00:05:30.268 "reset": true, 00:05:30.268 "nvme_admin": false, 
00:05:30.268 "nvme_io": false, 00:05:30.268 "nvme_io_md": false, 00:05:30.268 "write_zeroes": true, 00:05:30.268 "zcopy": true, 00:05:30.268 "get_zone_info": false, 00:05:30.268 "zone_management": false, 00:05:30.268 "zone_append": false, 00:05:30.268 "compare": false, 00:05:30.268 "compare_and_write": false, 00:05:30.268 "abort": true, 00:05:30.268 "seek_hole": false, 00:05:30.268 "seek_data": false, 00:05:30.268 "copy": true, 00:05:30.268 "nvme_iov_md": false 00:05:30.268 }, 00:05:30.268 "memory_domains": [ 00:05:30.268 { 00:05:30.268 "dma_device_id": "system", 00:05:30.268 "dma_device_type": 1 00:05:30.268 }, 00:05:30.268 { 00:05:30.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.268 "dma_device_type": 2 00:05:30.268 } 00:05:30.268 ], 00:05:30.268 "driver_specific": { 00:05:30.268 "passthru": { 00:05:30.268 "name": "Passthru0", 00:05:30.268 "base_bdev_name": "Malloc2" 00:05:30.268 } 00:05:30.268 } 00:05:30.268 } 00:05:30.268 ]' 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.268 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.269 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.269 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.269 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.269 03:53:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.269 00:05:30.269 real 0m0.241s 00:05:30.269 user 0m0.158s 00:05:30.269 sys 0m0.031s 00:05:30.269 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.269 03:53:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.269 ************************************ 00:05:30.269 END TEST rpc_daemon_integrity 00:05:30.269 ************************************ 00:05:30.269 03:53:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:30.269 03:53:24 rpc -- rpc/rpc.sh@84 -- # killprocess 585831 00:05:30.269 03:53:24 rpc -- common/autotest_common.sh@954 -- # '[' -z 585831 ']' 00:05:30.269 03:53:24 rpc -- common/autotest_common.sh@958 -- # kill -0 585831 00:05:30.269 03:53:24 rpc -- common/autotest_common.sh@959 -- # uname 00:05:30.528 03:53:24 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.528 03:53:24 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 585831 00:05:30.528 03:53:24 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.528 03:53:24 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.528 03:53:24 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 585831' 00:05:30.528 killing process with pid 585831 00:05:30.528 03:53:24 rpc -- common/autotest_common.sh@973 -- # kill 585831 00:05:30.528 03:53:24 rpc -- common/autotest_common.sh@978 -- # wait 585831 00:05:30.787 00:05:30.787 real 0m1.938s 00:05:30.787 user 0m2.473s 00:05:30.787 sys 0m0.642s 00:05:30.787 03:53:24 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.787 03:53:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.787 ************************************ 00:05:30.787 END TEST rpc 00:05:30.787 ************************************ 00:05:30.787 03:53:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:30.787 03:53:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.787 03:53:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.787 03:53:25 -- common/autotest_common.sh@10 -- # set +x 00:05:30.787 ************************************ 00:05:30.787 START TEST skip_rpc 00:05:30.787 ************************************ 00:05:30.787 03:53:25 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:30.787 * Looking for test storage... 00:05:30.787 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:30.787 03:53:25 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:30.787 03:53:25 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:30.787 03:53:25 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.046 03:53:25 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.046 03:53:25 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:31.046 03:53:25 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.046 03:53:25 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.046 --rc genhtml_branch_coverage=1 00:05:31.046 --rc genhtml_function_coverage=1 00:05:31.046 --rc genhtml_legend=1 00:05:31.046 --rc geninfo_all_blocks=1 00:05:31.046 --rc geninfo_unexecuted_blocks=1 00:05:31.046 00:05:31.046 ' 00:05:31.046 03:53:25 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.046 --rc genhtml_branch_coverage=1 00:05:31.046 --rc genhtml_function_coverage=1 00:05:31.046 --rc genhtml_legend=1 00:05:31.046 --rc geninfo_all_blocks=1 00:05:31.046 --rc geninfo_unexecuted_blocks=1 00:05:31.046 00:05:31.046 ' 00:05:31.046 03:53:25 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.046 --rc genhtml_branch_coverage=1 00:05:31.046 --rc genhtml_function_coverage=1 00:05:31.046 --rc genhtml_legend=1 00:05:31.046 --rc geninfo_all_blocks=1 00:05:31.046 --rc geninfo_unexecuted_blocks=1 00:05:31.046 00:05:31.046 ' 00:05:31.046 03:53:25 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.047 --rc genhtml_branch_coverage=1 00:05:31.047 --rc genhtml_function_coverage=1 00:05:31.047 --rc genhtml_legend=1 00:05:31.047 --rc geninfo_all_blocks=1 00:05:31.047 --rc geninfo_unexecuted_blocks=1 00:05:31.047 00:05:31.047 ' 00:05:31.047 03:53:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:31.047 03:53:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:31.047 03:53:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:31.047 03:53:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.047 03:53:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.047 03:53:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.047 ************************************ 00:05:31.047 START TEST skip_rpc 00:05:31.047 ************************************ 00:05:31.047 03:53:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:31.047 03:53:25 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=586285 00:05:31.047 03:53:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.047 03:53:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:31.047 03:53:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.047 [2024-12-10 03:53:25.284828] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:31.047 [2024-12-10 03:53:25.284866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid586285 ] 00:05:31.047 [2024-12-10 03:53:25.339424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.047 [2024-12-10 03:53:25.376176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.340 03:53:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.340 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:36.340 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.340 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 586285 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 586285 ']' 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 586285 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 586285 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 586285' 00:05:36.341 killing process with pid 586285 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- 
common/autotest_common.sh@973 -- # kill 586285 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 586285 00:05:36.341 00:05:36.341 real 0m5.359s 00:05:36.341 user 0m5.138s 00:05:36.341 sys 0m0.258s 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.341 03:53:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.341 ************************************ 00:05:36.341 END TEST skip_rpc 00:05:36.341 ************************************ 00:05:36.341 03:53:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:36.341 03:53:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.341 03:53:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.341 03:53:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.341 ************************************ 00:05:36.341 START TEST skip_rpc_with_json 00:05:36.341 ************************************ 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=587354 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 587354 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 587354 ']' 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.341 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.341 [2024-12-10 03:53:30.690763] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
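skip_rpc_with_json, now starting up, performs a configuration round-trip: nvmf_get_transports is expected to fail with "No such device" while no transport exists, the TCP transport is then created, and save_config dumps the live subsystem state as JSON. A rough equivalent outside the harness, assuming the same built tree and default RPC socket (the output path is illustrative):

./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails with -19 "No such device" until a transport exists
./scripts/rpc.py nvmf_create_transport -t tcp       # initialize the TCP transport
./scripts/rpc.py save_config > /tmp/config.json     # serialize every subsystem's current settings
# a target restarted from the saved JSON comes up with the transport already created:
./build/bin/spdk_tgt -m 0x1 --json /tmp/config.json &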
00:05:36.341 [2024-12-10 03:53:30.690798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587354 ] 00:05:36.677 [2024-12-10 03:53:30.747399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.677 [2024-12-10 03:53:30.787397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.677 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.677 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:36.677 03:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:36.677 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.677 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.677 [2024-12-10 03:53:30.993905] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:36.677 request: 00:05:36.677 { 00:05:36.677 "trtype": "tcp", 00:05:36.677 "method": "nvmf_get_transports", 00:05:36.677 "req_id": 1 00:05:36.677 } 00:05:36.677 Got JSON-RPC error response 00:05:36.677 response: 00:05:36.677 { 00:05:36.677 "code": -19, 00:05:36.677 "message": "No such device" 00:05:36.677 } 00:05:36.677 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:36.677 03:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:36.677 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.677 03:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.677 [2024-12-10 03:53:31.006004] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.677 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.677 03:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:36.677 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.677 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.959 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.959 03:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:36.959 { 00:05:36.959 "subsystems": [ 00:05:36.959 { 00:05:36.959 "subsystem": "fsdev", 00:05:36.959 "config": [ 00:05:36.959 { 00:05:36.959 "method": "fsdev_set_opts", 00:05:36.959 "params": { 00:05:36.960 "fsdev_io_pool_size": 65535, 00:05:36.960 "fsdev_io_cache_size": 256 00:05:36.960 } 00:05:36.960 } 00:05:36.960 ] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "keyring", 00:05:36.960 "config": [] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "iobuf", 00:05:36.960 "config": [ 00:05:36.960 { 00:05:36.960 "method": "iobuf_set_options", 00:05:36.960 "params": { 00:05:36.960 "small_pool_count": 8192, 00:05:36.960 "large_pool_count": 1024, 00:05:36.960 "small_bufsize": 8192, 00:05:36.960 "large_bufsize": 135168, 00:05:36.960 "enable_numa": false 00:05:36.960 } 00:05:36.960 } 00:05:36.960 ] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "sock", 00:05:36.960 "config": [ 00:05:36.960 { 
00:05:36.960 "method": "sock_set_default_impl", 00:05:36.960 "params": { 00:05:36.960 "impl_name": "posix" 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "sock_impl_set_options", 00:05:36.960 "params": { 00:05:36.960 "impl_name": "ssl", 00:05:36.960 "recv_buf_size": 4096, 00:05:36.960 "send_buf_size": 4096, 00:05:36.960 "enable_recv_pipe": true, 00:05:36.960 "enable_quickack": false, 00:05:36.960 "enable_placement_id": 0, 00:05:36.960 "enable_zerocopy_send_server": true, 00:05:36.960 "enable_zerocopy_send_client": false, 00:05:36.960 "zerocopy_threshold": 0, 00:05:36.960 "tls_version": 0, 00:05:36.960 "enable_ktls": false 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "sock_impl_set_options", 00:05:36.960 "params": { 00:05:36.960 "impl_name": "posix", 00:05:36.960 "recv_buf_size": 2097152, 00:05:36.960 "send_buf_size": 2097152, 00:05:36.960 "enable_recv_pipe": true, 00:05:36.960 "enable_quickack": false, 00:05:36.960 "enable_placement_id": 0, 00:05:36.960 "enable_zerocopy_send_server": true, 00:05:36.960 "enable_zerocopy_send_client": false, 00:05:36.960 "zerocopy_threshold": 0, 00:05:36.960 "tls_version": 0, 00:05:36.960 "enable_ktls": false 00:05:36.960 } 00:05:36.960 } 00:05:36.960 ] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "vmd", 00:05:36.960 "config": [] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "accel", 00:05:36.960 "config": [ 00:05:36.960 { 00:05:36.960 "method": "accel_set_options", 00:05:36.960 "params": { 00:05:36.960 "small_cache_size": 128, 00:05:36.960 "large_cache_size": 16, 00:05:36.960 "task_count": 2048, 00:05:36.960 "sequence_count": 2048, 00:05:36.960 "buf_count": 2048 00:05:36.960 } 00:05:36.960 } 00:05:36.960 ] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "bdev", 00:05:36.960 "config": [ 00:05:36.960 { 00:05:36.960 "method": "bdev_set_options", 00:05:36.960 "params": { 00:05:36.960 "bdev_io_pool_size": 65535, 00:05:36.960 "bdev_io_cache_size": 256, 00:05:36.960 "bdev_auto_examine": true, 00:05:36.960 "iobuf_small_cache_size": 128, 00:05:36.960 "iobuf_large_cache_size": 16 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "bdev_raid_set_options", 00:05:36.960 "params": { 00:05:36.960 "process_window_size_kb": 1024, 00:05:36.960 "process_max_bandwidth_mb_sec": 0 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "bdev_iscsi_set_options", 00:05:36.960 "params": { 00:05:36.960 "timeout_sec": 30 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "bdev_nvme_set_options", 00:05:36.960 "params": { 00:05:36.960 "action_on_timeout": "none", 00:05:36.960 "timeout_us": 0, 00:05:36.960 "timeout_admin_us": 0, 00:05:36.960 "keep_alive_timeout_ms": 10000, 00:05:36.960 "arbitration_burst": 0, 00:05:36.960 "low_priority_weight": 0, 00:05:36.960 "medium_priority_weight": 0, 00:05:36.960 "high_priority_weight": 0, 00:05:36.960 "nvme_adminq_poll_period_us": 10000, 00:05:36.960 "nvme_ioq_poll_period_us": 0, 00:05:36.960 "io_queue_requests": 0, 00:05:36.960 "delay_cmd_submit": true, 00:05:36.960 "transport_retry_count": 4, 00:05:36.960 "bdev_retry_count": 3, 00:05:36.960 "transport_ack_timeout": 0, 00:05:36.960 "ctrlr_loss_timeout_sec": 0, 00:05:36.960 "reconnect_delay_sec": 0, 00:05:36.960 "fast_io_fail_timeout_sec": 0, 00:05:36.960 "disable_auto_failback": false, 00:05:36.960 "generate_uuids": false, 00:05:36.960 "transport_tos": 0, 00:05:36.960 "nvme_error_stat": false, 00:05:36.960 "rdma_srq_size": 0, 00:05:36.960 "io_path_stat": false, 
00:05:36.960 "allow_accel_sequence": false, 00:05:36.960 "rdma_max_cq_size": 0, 00:05:36.960 "rdma_cm_event_timeout_ms": 0, 00:05:36.960 "dhchap_digests": [ 00:05:36.960 "sha256", 00:05:36.960 "sha384", 00:05:36.960 "sha512" 00:05:36.960 ], 00:05:36.960 "dhchap_dhgroups": [ 00:05:36.960 "null", 00:05:36.960 "ffdhe2048", 00:05:36.960 "ffdhe3072", 00:05:36.960 "ffdhe4096", 00:05:36.960 "ffdhe6144", 00:05:36.960 "ffdhe8192" 00:05:36.960 ] 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "bdev_nvme_set_hotplug", 00:05:36.960 "params": { 00:05:36.960 "period_us": 100000, 00:05:36.960 "enable": false 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "bdev_wait_for_examine" 00:05:36.960 } 00:05:36.960 ] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "scsi", 00:05:36.960 "config": null 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "scheduler", 00:05:36.960 "config": [ 00:05:36.960 { 00:05:36.960 "method": "framework_set_scheduler", 00:05:36.960 "params": { 00:05:36.960 "name": "static" 00:05:36.960 } 00:05:36.960 } 00:05:36.960 ] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "vhost_scsi", 00:05:36.960 "config": [] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "vhost_blk", 00:05:36.960 "config": [] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "ublk", 00:05:36.960 "config": [] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "nbd", 00:05:36.960 "config": [] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "nvmf", 00:05:36.960 "config": [ 00:05:36.960 { 00:05:36.960 "method": "nvmf_set_config", 00:05:36.960 "params": { 00:05:36.960 "discovery_filter": "match_any", 00:05:36.960 "admin_cmd_passthru": { 00:05:36.960 "identify_ctrlr": false 00:05:36.960 }, 00:05:36.960 "dhchap_digests": [ 00:05:36.960 "sha256", 00:05:36.960 "sha384", 00:05:36.960 "sha512" 00:05:36.960 ], 00:05:36.960 "dhchap_dhgroups": [ 00:05:36.960 "null", 00:05:36.960 "ffdhe2048", 00:05:36.960 "ffdhe3072", 00:05:36.960 "ffdhe4096", 00:05:36.960 "ffdhe6144", 00:05:36.960 "ffdhe8192" 00:05:36.960 ] 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "nvmf_set_max_subsystems", 00:05:36.960 "params": { 00:05:36.960 "max_subsystems": 1024 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "nvmf_set_crdt", 00:05:36.960 "params": { 00:05:36.960 "crdt1": 0, 00:05:36.960 "crdt2": 0, 00:05:36.960 "crdt3": 0 00:05:36.960 } 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "method": "nvmf_create_transport", 00:05:36.960 "params": { 00:05:36.960 "trtype": "TCP", 00:05:36.960 "max_queue_depth": 128, 00:05:36.960 "max_io_qpairs_per_ctrlr": 127, 00:05:36.960 "in_capsule_data_size": 4096, 00:05:36.960 "max_io_size": 131072, 00:05:36.960 "io_unit_size": 131072, 00:05:36.960 "max_aq_depth": 128, 00:05:36.960 "num_shared_buffers": 511, 00:05:36.960 "buf_cache_size": 4294967295, 00:05:36.960 "dif_insert_or_strip": false, 00:05:36.960 "zcopy": false, 00:05:36.960 "c2h_success": true, 00:05:36.960 "sock_priority": 0, 00:05:36.960 "abort_timeout_sec": 1, 00:05:36.960 "ack_timeout": 0, 00:05:36.960 "data_wr_pool_size": 0 00:05:36.960 } 00:05:36.960 } 00:05:36.960 ] 00:05:36.960 }, 00:05:36.960 { 00:05:36.960 "subsystem": "iscsi", 00:05:36.960 "config": [ 00:05:36.960 { 00:05:36.960 "method": "iscsi_set_options", 00:05:36.960 "params": { 00:05:36.960 "node_base": "iqn.2016-06.io.spdk", 00:05:36.960 "max_sessions": 128, 00:05:36.960 "max_connections_per_session": 2, 00:05:36.960 "max_queue_depth": 64, 00:05:36.960 
"default_time2wait": 2, 00:05:36.960 "default_time2retain": 20, 00:05:36.960 "first_burst_length": 8192, 00:05:36.960 "immediate_data": true, 00:05:36.960 "allow_duplicated_isid": false, 00:05:36.960 "error_recovery_level": 0, 00:05:36.960 "nop_timeout": 60, 00:05:36.960 "nop_in_interval": 30, 00:05:36.960 "disable_chap": false, 00:05:36.960 "require_chap": false, 00:05:36.960 "mutual_chap": false, 00:05:36.960 "chap_group": 0, 00:05:36.960 "max_large_datain_per_connection": 64, 00:05:36.960 "max_r2t_per_connection": 4, 00:05:36.960 "pdu_pool_size": 36864, 00:05:36.960 "immediate_data_pool_size": 16384, 00:05:36.960 "data_out_pool_size": 2048 00:05:36.960 } 00:05:36.960 } 00:05:36.960 ] 00:05:36.960 } 00:05:36.960 ] 00:05:36.960 } 00:05:36.960 03:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:36.960 03:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 587354 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 587354 ']' 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 587354 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587354 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587354' 00:05:36.961 killing process with pid 587354 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 587354 00:05:36.961 03:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 587354 00:05:37.219 03:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=587572 00:05:37.219 03:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:37.219 03:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 587572 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 587572 ']' 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 587572 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587572 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587572' 00:05:42.490 killing process with pid 587572 00:05:42.490 03:53:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 587572 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 587572 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:42.490 03:53:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:42.750 00:05:42.750 real 0m6.215s 00:05:42.750 user 0m5.954s 00:05:42.750 sys 0m0.535s 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.750 ************************************ 00:05:42.750 END TEST skip_rpc_with_json 00:05:42.750 ************************************ 00:05:42.750 03:53:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:42.750 03:53:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.750 03:53:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.750 03:53:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.750 ************************************ 00:05:42.750 START TEST skip_rpc_with_delay 00:05:42.750 ************************************ 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.750 [2024-12-10 03:53:36.976494] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
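The *ERROR* above is the expected outcome: skip_rpc_with_delay asserts that spdk_tgt rejects '--wait-for-rpc' once '--no-rpc-server' has disabled the RPC server. Stripped of the NOT/valid_exec_arg plumbing, the check reduces to this sketch (binary path shortened for readability):

    # The test passes only when spdk_tgt exits non-zero; es=1 in the trace
    # records exactly that captured exit status.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt should have rejected --wait-for-rpc here" >&2
        exit 1
    fi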
00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.750 00:05:42.750 real 0m0.047s 00:05:42.750 user 0m0.023s 00:05:42.750 sys 0m0.023s 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.750 03:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:42.750 ************************************ 00:05:42.750 END TEST skip_rpc_with_delay 00:05:42.750 ************************************ 00:05:42.750 03:53:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:42.750 03:53:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:42.750 03:53:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:42.750 03:53:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.750 03:53:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.750 03:53:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.750 ************************************ 00:05:42.750 START TEST exit_on_failed_rpc_init 00:05:42.750 ************************************ 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=588557 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 588557 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 588557 ']' 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.750 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.750 [2024-12-10 03:53:37.092067] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:42.750 [2024-12-10 03:53:37.092105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588557 ] 00:05:43.009 [2024-12-10 03:53:37.144500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.009 [2024-12-10 03:53:37.183842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.009 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.009 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:43.009 03:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.009 03:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.009 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:43.009 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.009 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.009 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.009 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.268 [2024-12-10 03:53:37.446530] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:05:43.268 [2024-12-10 03:53:37.446575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588736 ] 00:05:43.268 [2024-12-10 03:53:37.504015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.268 [2024-12-10 03:53:37.546036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.268 [2024-12-10 03:53:37.546090] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:43.268 [2024-12-10 03:53:37.546098] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:43.268 [2024-12-10 03:53:37.546104] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 588557 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 588557 ']' 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 588557 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588557 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588557' 00:05:43.268 killing process with pid 588557 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 588557 00:05:43.268 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 588557 00:05:43.837 00:05:43.837 real 0m0.880s 00:05:43.837 user 0m0.936s 00:05:43.837 sys 0m0.341s 00:05:43.837 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.837 03:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.837 ************************************ 00:05:43.837 END TEST exit_on_failed_rpc_init 00:05:43.837 ************************************ 00:05:43.837 03:53:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:43.837 00:05:43.837 real 0m12.928s 00:05:43.837 user 0m12.253s 00:05:43.837 sys 0m1.413s 00:05:43.837 03:53:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.837 03:53:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.837 ************************************ 00:05:43.837 END TEST skip_rpc 00:05:43.837 ************************************ 00:05:43.837 03:53:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:43.837 03:53:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.837 03:53:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.837 03:53:38 -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.837 ************************************ 00:05:43.837 START TEST rpc_client 00:05:43.837 ************************************ 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:43.837 * Looking for test storage... 00:05:43.837 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.837 03:53:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.837 --rc genhtml_branch_coverage=1 00:05:43.837 --rc genhtml_function_coverage=1 00:05:43.837 --rc genhtml_legend=1 00:05:43.837 --rc geninfo_all_blocks=1 00:05:43.837 --rc geninfo_unexecuted_blocks=1 00:05:43.837 00:05:43.837 ' 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.837 --rc genhtml_branch_coverage=1 00:05:43.837 --rc genhtml_function_coverage=1 00:05:43.837 --rc genhtml_legend=1 00:05:43.837 --rc geninfo_all_blocks=1 00:05:43.837 --rc geninfo_unexecuted_blocks=1 00:05:43.837 00:05:43.837 ' 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.837 --rc genhtml_branch_coverage=1 00:05:43.837 --rc genhtml_function_coverage=1 00:05:43.837 --rc genhtml_legend=1 00:05:43.837 --rc geninfo_all_blocks=1 00:05:43.837 --rc geninfo_unexecuted_blocks=1 00:05:43.837 00:05:43.837 ' 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.837 --rc genhtml_branch_coverage=1 00:05:43.837 --rc genhtml_function_coverage=1 00:05:43.837 --rc genhtml_legend=1 00:05:43.837 --rc geninfo_all_blocks=1 00:05:43.837 --rc geninfo_unexecuted_blocks=1 00:05:43.837 00:05:43.837 ' 00:05:43.837 03:53:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:43.837 OK 00:05:43.837 03:53:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:43.837 00:05:43.837 real 0m0.178s 00:05:43.837 user 0m0.105s 00:05:43.837 sys 0m0.086s 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.837 03:53:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:43.837 ************************************ 00:05:43.837 END TEST rpc_client 00:05:43.837 ************************************ 00:05:44.098 03:53:38 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.098 
03:53:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.098 03:53:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.098 03:53:38 -- common/autotest_common.sh@10 -- # set +x 00:05:44.098 ************************************ 00:05:44.098 START TEST json_config 00:05:44.098 ************************************ 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.098 03:53:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.098 03:53:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.098 03:53:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.098 03:53:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.098 03:53:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.098 03:53:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.098 03:53:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.098 03:53:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.098 03:53:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.098 03:53:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.098 03:53:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.098 03:53:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:44.098 03:53:38 json_config -- scripts/common.sh@345 -- # : 1 00:05:44.098 03:53:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.098 03:53:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.098 03:53:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:44.098 03:53:38 json_config -- scripts/common.sh@353 -- # local d=1 00:05:44.098 03:53:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.098 03:53:38 json_config -- scripts/common.sh@355 -- # echo 1 00:05:44.098 03:53:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.098 03:53:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:44.098 03:53:38 json_config -- scripts/common.sh@353 -- # local d=2 00:05:44.098 03:53:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.098 03:53:38 json_config -- scripts/common.sh@355 -- # echo 2 00:05:44.098 03:53:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.098 03:53:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.098 03:53:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.098 03:53:38 json_config -- scripts/common.sh@368 -- # return 0 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.098 --rc genhtml_branch_coverage=1 00:05:44.098 --rc genhtml_function_coverage=1 00:05:44.098 --rc genhtml_legend=1 00:05:44.098 --rc geninfo_all_blocks=1 00:05:44.098 --rc geninfo_unexecuted_blocks=1 00:05:44.098 00:05:44.098 ' 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.098 --rc genhtml_branch_coverage=1 00:05:44.098 --rc genhtml_function_coverage=1 00:05:44.098 --rc genhtml_legend=1 00:05:44.098 --rc geninfo_all_blocks=1 00:05:44.098 --rc geninfo_unexecuted_blocks=1 00:05:44.098 00:05:44.098 ' 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.098 --rc genhtml_branch_coverage=1 00:05:44.098 --rc genhtml_function_coverage=1 00:05:44.098 --rc genhtml_legend=1 00:05:44.098 --rc geninfo_all_blocks=1 00:05:44.098 --rc geninfo_unexecuted_blocks=1 00:05:44.098 00:05:44.098 ' 00:05:44.098 03:53:38 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.098 --rc genhtml_branch_coverage=1 00:05:44.098 --rc genhtml_function_coverage=1 00:05:44.098 --rc genhtml_legend=1 00:05:44.098 --rc geninfo_all_blocks=1 00:05:44.098 --rc geninfo_unexecuted_blocks=1 00:05:44.098 00:05:44.098 ' 00:05:44.098 03:53:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
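Both the rpc_client and json_config preambles above walk scripts/common.sh's cmp_versions to decide whether the installed lcov (1.15) predates 2.x and therefore needs the legacy --rc lcov_* options. The routine splits each version string on '.-:' and compares field by field; a condensed sketch of the same idea (not the verbatim library code):

    ver_lt() {
        local -a a b
        local i n
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message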
00:05:44.098 03:53:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:44.098 03:53:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.098 03:53:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.098 03:53:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.098 03:53:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.098 03:53:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.098 03:53:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.098 03:53:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.098 03:53:38 json_config -- paths/export.sh@5 -- # export PATH 00:05:44.098 03:53:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@51 -- # : 0 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.098 
03:53:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.098 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.098 03:53:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:44.099 INFO: JSON configuration test init 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.099 03:53:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:44.099 03:53:38 json_config -- json_config/common.sh@9 -- # 
local app=target 00:05:44.099 03:53:38 json_config -- json_config/common.sh@10 -- # shift 00:05:44.099 03:53:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.099 03:53:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.099 03:53:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.099 03:53:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.099 03:53:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.099 03:53:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=588931 00:05:44.099 03:53:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.099 Waiting for target to run... 00:05:44.099 03:53:38 json_config -- json_config/common.sh@25 -- # waitforlisten 588931 /var/tmp/spdk_tgt.sock 00:05:44.099 03:53:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 588931 ']' 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.099 03:53:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.358 [2024-12-10 03:53:38.508198] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:05:44.358 [2024-12-10 03:53:38.508244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588931 ] 00:05:44.617 [2024-12-10 03:53:38.923478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.617 [2024-12-10 03:53:38.974250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.184 03:53:39 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.184 03:53:39 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:45.184 03:53:39 json_config -- json_config/common.sh@26 -- # echo '' 00:05:45.184 00:05:45.184 03:53:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:45.184 03:53:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:45.184 03:53:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.184 03:53:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.184 03:53:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:45.184 03:53:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:45.184 03:53:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.184 03:53:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.184 03:53:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:45.184 03:53:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:45.184 03:53:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:48.475 03:53:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.475 03:53:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:48.475 03:53:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@54 -- # 
echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@54 -- # sort 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:48.475 03:53:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.475 03:53:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:48.475 03:53:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.475 03:53:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:48.475 03:53:42 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:48.475 03:53:42 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:48.475 03:53:42 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:48.475 03:53:42 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:48.475 03:53:42 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:48.475 03:53:42 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:48.475 03:53:42 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:48.475 03:53:42 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:48.475 03:53:42 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:48.475 03:53:42 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:05:48.475 03:53:42 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:48.475 03:53:42 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:48.475 03:53:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:55.040 
03:53:48 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@320 -- # e810=() 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@321 -- # x722=() 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@322 -- # mlx=() 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:55.040 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:55.040 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:55.040 03:53:48 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:55.040 Found net devices under 0000:18:00.0: mlx_0_0 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:55.040 Found net devices under 0000:18:00.1: mlx_0_1 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@62 -- # uname 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:55.040 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:55.040 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:05:55.040 altname enp24s0f0np0 00:05:55.040 altname ens785f0np0 00:05:55.040 inet 192.168.100.8/24 scope global mlx_0_0 00:05:55.040 valid_lft forever preferred_lft forever 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:55.040 03:53:48 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:55.041 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:55.041 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:05:55.041 altname enp24s0f1np1 00:05:55.041 altname ens785f1np1 
00:05:55.041 inet 192.168.100.9/24 scope global mlx_0_1 00:05:55.041 valid_lft forever preferred_lft forever 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@450 -- # return 0 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:55.041 192.168.100.9' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:55.041 192.168.100.9' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@485 -- # head -n 1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:55.041 03:53:48 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:55.041 192.168.100.9' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@486 -- # head -n 1 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:55.041 03:53:48 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:55.041 03:53:48 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:05:55.041 03:53:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:55.041 03:53:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:55.041 MallocForNvmf0 00:05:55.041 03:53:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:55.041 03:53:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:55.041 MallocForNvmf1 00:05:55.041 03:53:48 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:55.041 03:53:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:55.041 [2024-12-10 03:53:49.001011] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:55.041 [2024-12-10 03:53:49.030084] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a3c890/0x19112c0) succeed. 00:05:55.041 [2024-12-10 03:53:49.041696] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a3b8d0/0x1990f80) succeed. 
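Aside: the address harvesting traced above (nvmf/common.sh@117 and @484-486) is a plain text pipeline: `ip -o -4 addr show` on each RDMA netdev, awk for the fourth field, cut to drop the prefix length, then head/tail to split the result into first and second target IPs. A minimal standalone sketch of the same idea, assuming the interface names mlx_0_0/mlx_0_1 seen in this run:

    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 is "addr/prefixlen", cut keeps the addr
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    rdma_ip_list="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    first_ip=$(echo "$rdma_ip_list" | head -n 1)                # 192.168.100.8 in this run
    second_ip=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)  # 192.168.100.9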
00:05:55.041 03:53:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.041 03:53:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.041 03:53:49 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:55.041 03:53:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:55.041 03:53:49 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:55.041 03:53:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:55.300 03:53:49 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:55.300 03:53:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:55.559 [2024-12-10 03:53:49.739058] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:55.559 03:53:49 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:55.559 03:53:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.559 03:53:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.559 03:53:49 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:55.559 03:53:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.559 03:53:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.559 03:53:49 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:55.559 03:53:49 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:55.559 03:53:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:55.818 MallocBdevForConfigChangeCheck 00:05:55.818 03:53:49 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:55.818 03:53:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.818 03:53:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.818 03:53:50 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:55.818 03:53:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.077 03:53:50 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:56.077 INFO: shutting down applications... 
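Aside: stripped of xtrace noise, the target provisioning performed above is a short sequence of rpc.py calls against /var/tmp/spdk_tgt.sock. The sketch below restates them in order, with values taken from this run; note that the requested in-capsule data size of 0 is raised by the target to the 256-byte minimum, per the rdma.c warning above:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    "$rpc" -s "$sock" bdev_malloc_create 8 512  --name MallocForNvmf0   # 8 MiB bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB bdev, 1024 B blocks
    "$rpc" -s "$sock" nvmf_create_transport -t rdma -u 8192 -c 0        # RDMA transport, 8 KiB IO unit
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420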
00:05:56.077 03:53:50 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:56.077 03:53:50 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:56.077 03:53:50 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:56.077 03:53:50 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:00.266 Calling clear_iscsi_subsystem 00:06:00.266 Calling clear_nvmf_subsystem 00:06:00.266 Calling clear_nbd_subsystem 00:06:00.266 Calling clear_ublk_subsystem 00:06:00.266 Calling clear_vhost_blk_subsystem 00:06:00.266 Calling clear_vhost_scsi_subsystem 00:06:00.266 Calling clear_bdev_subsystem 00:06:00.266 03:53:54 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:00.266 03:53:54 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:00.266 03:53:54 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:00.266 03:53:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:00.266 03:53:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.266 03:53:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:00.266 03:53:54 json_config -- json_config/json_config.sh@352 -- # break 00:06:00.266 03:53:54 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:00.266 03:53:54 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:00.266 03:53:54 json_config -- json_config/common.sh@31 -- # local app=target 00:06:00.266 03:53:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:00.266 03:53:54 json_config -- json_config/common.sh@35 -- # [[ -n 588931 ]] 00:06:00.266 03:53:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 588931 00:06:00.266 03:53:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:00.266 03:53:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.267 03:53:54 json_config -- json_config/common.sh@41 -- # kill -0 588931 00:06:00.267 03:53:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.836 03:53:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.836 03:53:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.836 03:53:55 json_config -- json_config/common.sh@41 -- # kill -0 588931 00:06:00.836 03:53:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:00.836 03:53:55 json_config -- json_config/common.sh@43 -- # break 00:06:00.836 03:53:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:00.837 03:53:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:00.837 SPDK target shutdown done 00:06:00.837 03:53:55 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:00.837 INFO: relaunching applications... 
00:06:00.837 03:53:55 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.837 03:53:55 json_config -- json_config/common.sh@9 -- # local app=target 00:06:00.837 03:53:55 json_config -- json_config/common.sh@10 -- # shift 00:06:00.837 03:53:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:00.837 03:53:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:00.837 03:53:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:00.837 03:53:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.837 03:53:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.837 03:53:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=594294 00:06:00.837 03:53:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:00.837 Waiting for target to run... 00:06:00.837 03:53:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.837 03:53:55 json_config -- json_config/common.sh@25 -- # waitforlisten 594294 /var/tmp/spdk_tgt.sock 00:06:00.837 03:53:55 json_config -- common/autotest_common.sh@835 -- # '[' -z 594294 ']' 00:06:00.837 03:53:55 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.838 03:53:55 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.838 03:53:55 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:00.838 03:53:55 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.838 03:53:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.838 [2024-12-10 03:53:55.118025] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:00.838 [2024-12-10 03:53:55.118077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594294 ] 00:06:01.407 [2024-12-10 03:53:55.550375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.408 [2024-12-10 03:53:55.608787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.697 [2024-12-10 03:53:58.661605] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xad5330/0xae1e40) succeed. 00:06:04.697 [2024-12-10 03:53:58.671591] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xad8580/0xb61e80) succeed. 
00:06:04.697 [2024-12-10 03:53:58.719555] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:04.955 03:53:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.955 03:53:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:04.955 03:53:59 json_config -- json_config/common.sh@26 -- # echo '' 00:06:04.955 00:06:04.955 03:53:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:04.955 03:53:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:04.955 INFO: Checking if target configuration is the same... 00:06:04.955 03:53:59 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.955 03:53:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:04.955 03:53:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.955 + '[' 2 -ne 2 ']' 00:06:04.956 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:04.956 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:04.956 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:04.956 +++ basename /dev/fd/62 00:06:04.956 ++ mktemp /tmp/62.XXX 00:06:04.956 + tmp_file_1=/tmp/62.Zbq 00:06:04.956 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.956 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:04.956 + tmp_file_2=/tmp/spdk_tgt_config.json.Q36 00:06:04.956 + ret=0 00:06:04.956 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.214 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.472 + diff -u /tmp/62.Zbq /tmp/spdk_tgt_config.json.Q36 00:06:05.472 + echo 'INFO: JSON config files are the same' 00:06:05.472 INFO: JSON config files are the same 00:06:05.472 + rm /tmp/62.Zbq /tmp/spdk_tgt_config.json.Q36 00:06:05.472 + exit 0 00:06:05.472 03:53:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:05.472 03:53:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:05.472 INFO: changing configuration and checking if this can be detected... 
00:06:05.472 03:53:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:05.472 03:53:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:05.473 03:53:59 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.473 03:53:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:05.473 03:53:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.473 + '[' 2 -ne 2 ']' 00:06:05.473 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:05.473 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:05.473 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:05.473 +++ basename /dev/fd/62 00:06:05.473 ++ mktemp /tmp/62.XXX 00:06:05.473 + tmp_file_1=/tmp/62.W7D 00:06:05.473 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.473 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:05.473 + tmp_file_2=/tmp/spdk_tgt_config.json.bWT 00:06:05.473 + ret=0 00:06:05.473 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.730 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.989 + diff -u /tmp/62.W7D /tmp/spdk_tgt_config.json.bWT 00:06:05.989 + ret=1 00:06:05.989 + echo '=== Start of file: /tmp/62.W7D ===' 00:06:05.989 + cat /tmp/62.W7D 00:06:05.989 + echo '=== End of file: /tmp/62.W7D ===' 00:06:05.989 + echo '' 00:06:05.989 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bWT ===' 00:06:05.989 + cat /tmp/spdk_tgt_config.json.bWT 00:06:05.989 + echo '=== End of file: /tmp/spdk_tgt_config.json.bWT ===' 00:06:05.989 + echo '' 00:06:05.989 + rm /tmp/62.W7D /tmp/spdk_tgt_config.json.bWT 00:06:05.989 + exit 1 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:05.989 INFO: configuration change detected. 
00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 594294 ]] 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.989 03:54:00 json_config -- json_config/json_config.sh@330 -- # killprocess 594294 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@954 -- # '[' -z 594294 ']' 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@958 -- # kill -0 594294 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@959 -- # uname 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 594294 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 594294' 00:06:05.989 killing process with pid 594294 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@973 -- # kill 594294 00:06:05.989 03:54:00 json_config -- common/autotest_common.sh@978 -- # wait 594294 00:06:10.181 03:54:04 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.181 03:54:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:10.181 03:54:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.181 03:54:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.181 03:54:04 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:10.181 03:54:04 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:10.181 INFO: Success 00:06:10.181 03:54:04 json_config -- json_config/json_config.sh@1 -- 
# nvmftestfini 00:06:10.181 03:54:04 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:10.181 03:54:04 json_config -- nvmf/common.sh@121 -- # sync 00:06:10.181 03:54:04 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:06:10.181 03:54:04 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:06:10.181 03:54:04 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:06:10.181 03:54:04 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:10.181 03:54:04 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:06:10.181 00:06:10.181 real 0m25.903s 00:06:10.181 user 0m27.499s 00:06:10.181 sys 0m6.973s 00:06:10.181 03:54:04 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.181 03:54:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.181 ************************************ 00:06:10.181 END TEST json_config 00:06:10.181 ************************************ 00:06:10.181 03:54:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.181 03:54:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.181 03:54:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.181 03:54:04 -- common/autotest_common.sh@10 -- # set +x 00:06:10.181 ************************************ 00:06:10.181 START TEST json_config_extra_key 00:06:10.181 ************************************ 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.181 --rc genhtml_branch_coverage=1 00:06:10.181 --rc genhtml_function_coverage=1 00:06:10.181 --rc genhtml_legend=1 00:06:10.181 --rc geninfo_all_blocks=1 00:06:10.181 --rc geninfo_unexecuted_blocks=1 00:06:10.181 00:06:10.181 ' 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.181 --rc genhtml_branch_coverage=1 00:06:10.181 --rc genhtml_function_coverage=1 00:06:10.181 --rc genhtml_legend=1 00:06:10.181 --rc geninfo_all_blocks=1 00:06:10.181 --rc geninfo_unexecuted_blocks=1 00:06:10.181 00:06:10.181 ' 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.181 --rc genhtml_branch_coverage=1 00:06:10.181 --rc genhtml_function_coverage=1 00:06:10.181 --rc genhtml_legend=1 00:06:10.181 --rc geninfo_all_blocks=1 00:06:10.181 --rc geninfo_unexecuted_blocks=1 00:06:10.181 00:06:10.181 ' 00:06:10.181 03:54:04 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.181 --rc genhtml_branch_coverage=1 00:06:10.181 --rc genhtml_function_coverage=1 00:06:10.181 --rc genhtml_legend=1 00:06:10.181 --rc geninfo_all_blocks=1 00:06:10.181 --rc geninfo_unexecuted_blocks=1 00:06:10.181 00:06:10.181 ' 00:06:10.181 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.181 
03:54:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.181 03:54:04 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.181 03:54:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.181 03:54:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.181 03:54:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.181 03:54:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.181 03:54:04 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.181 03:54:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.182 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.182 03:54:04 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.182 03:54:04 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.182 03:54:04 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:10.182 INFO: launching applications... 
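Aside: the declare -A bookkeeping traced above is how json_config/common.sh tracks every app it can launch: four associative arrays indexed by app name, of which 'target' is just one possible key. A reduced sketch of that pattern:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')
    app=target
    echo "launch: spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"
    echo "pid slot for $app: '${app_pid[$app]}' (filled in once the app is started)"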
00:06:10.182 03:54:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=596408 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.182 Waiting for target to run... 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 596408 /var/tmp/spdk_tgt.sock 00:06:10.182 03:54:04 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 596408 ']' 00:06:10.182 03:54:04 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.182 03:54:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.182 03:54:04 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.182 03:54:04 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.182 03:54:04 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.182 03:54:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.182 [2024-12-10 03:54:04.444024] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:10.182 [2024-12-10 03:54:04.444076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596408 ] 00:06:10.441 [2024-12-10 03:54:04.707564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.441 [2024-12-10 03:54:04.738891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.008 03:54:05 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.008 03:54:05 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:11.008 03:54:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.008 00:06:11.008 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:11.008 INFO: shutting down applications... 
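Aside: waitforlisten above does what its name and max_retries=100 suggest. A simplified stand-in (the real helper lives in autotest_common.sh and does more; the 0.1 s poll interval here is an assumption) would poll for the RPC socket while making sure the process is still alive:

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [[ -S "$sock" ]] && return 0             # RPC socket is up
            sleep 0.1
        done
        return 1
    }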
00:06:11.008 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.008 03:54:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.008 03:54:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.008 03:54:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 596408 ]] 00:06:11.008 03:54:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 596408 00:06:11.008 03:54:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.008 03:54:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.008 03:54:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 596408 00:06:11.008 03:54:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.577 03:54:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.577 03:54:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.577 03:54:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 596408 00:06:11.577 03:54:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.577 03:54:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:11.577 03:54:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.577 03:54:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.577 SPDK target shutdown done 00:06:11.577 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:11.577 Success 00:06:11.577 00:06:11.577 real 0m1.515s 00:06:11.577 user 0m1.285s 00:06:11.577 sys 0m0.372s 00:06:11.577 03:54:05 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.577 03:54:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.577 ************************************ 00:06:11.577 END TEST json_config_extra_key 00:06:11.577 ************************************ 00:06:11.577 03:54:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.577 03:54:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.577 03:54:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.577 03:54:05 -- common/autotest_common.sh@10 -- # set +x 00:06:11.577 ************************************ 00:06:11.577 START TEST alias_rpc 00:06:11.577 ************************************ 00:06:11.577 03:54:05 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.577 * Looking for test storage... 
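Aside: the scripts/common.sh trace that opens each TEST above and below (lt 1.15 2, cmp_versions, read -ra ver1) is a pure-bash version comparison used to gate the lcov coverage options: both version strings are split on '.', '-' and ':', then compared component-wise as integers. A compact sketch of the same algorithm:

    version_lt() {   # sketch: succeeds if version $1 sorts before version $2
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions: not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2: use the legacy LCOV_OPTS"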
00:06:11.577 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:11.577 03:54:05 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.577 03:54:05 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.577 03:54:05 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.577 03:54:05 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:11.577 03:54:05 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.578 03:54:05 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.578 --rc genhtml_branch_coverage=1 00:06:11.578 --rc genhtml_function_coverage=1 00:06:11.578 --rc genhtml_legend=1 00:06:11.578 --rc geninfo_all_blocks=1 00:06:11.578 --rc geninfo_unexecuted_blocks=1 00:06:11.578 00:06:11.578 ' 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.578 --rc genhtml_branch_coverage=1 00:06:11.578 --rc genhtml_function_coverage=1 00:06:11.578 --rc genhtml_legend=1 00:06:11.578 --rc geninfo_all_blocks=1 00:06:11.578 --rc geninfo_unexecuted_blocks=1 00:06:11.578 00:06:11.578 ' 00:06:11.578 03:54:05 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.578 --rc genhtml_branch_coverage=1 00:06:11.578 --rc genhtml_function_coverage=1 00:06:11.578 --rc genhtml_legend=1 00:06:11.578 --rc geninfo_all_blocks=1 00:06:11.578 --rc geninfo_unexecuted_blocks=1 00:06:11.578 00:06:11.578 ' 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.578 --rc genhtml_branch_coverage=1 00:06:11.578 --rc genhtml_function_coverage=1 00:06:11.578 --rc genhtml_legend=1 00:06:11.578 --rc geninfo_all_blocks=1 00:06:11.578 --rc geninfo_unexecuted_blocks=1 00:06:11.578 00:06:11.578 ' 00:06:11.578 03:54:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.578 03:54:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=597123 00:06:11.578 03:54:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 597123 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 597123 ']' 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.578 03:54:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.578 03:54:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.837 [2024-12-10 03:54:05.988107] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:11.837 [2024-12-10 03:54:05.988157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597123 ] 00:06:11.837 [2024-12-10 03:54:06.045459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.837 [2024-12-10 03:54:06.084488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.096 03:54:06 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.096 03:54:06 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.096 03:54:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:12.096 03:54:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 597123 00:06:12.096 03:54:06 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 597123 ']' 00:06:12.096 03:54:06 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 597123 00:06:12.096 03:54:06 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:12.355 03:54:06 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.355 03:54:06 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597123 00:06:12.355 03:54:06 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.355 03:54:06 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.355 03:54:06 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597123' 00:06:12.355 killing process with pid 597123 00:06:12.355 03:54:06 alias_rpc -- common/autotest_common.sh@973 -- # kill 597123 00:06:12.355 03:54:06 alias_rpc -- common/autotest_common.sh@978 -- # wait 597123 00:06:12.613 00:06:12.614 real 0m1.024s 00:06:12.614 user 0m1.014s 00:06:12.614 sys 0m0.382s 00:06:12.614 03:54:06 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.614 03:54:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.614 ************************************ 00:06:12.614 END TEST alias_rpc 00:06:12.614 ************************************ 00:06:12.614 03:54:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:12.614 03:54:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:12.614 03:54:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.614 03:54:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.614 03:54:06 -- common/autotest_common.sh@10 -- # set +x 00:06:12.614 ************************************ 00:06:12.614 START TEST spdkcli_tcp 00:06:12.614 ************************************ 00:06:12.614 03:54:06 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:12.614 * Looking for test storage... 
00:06:12.614 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:12.614 03:54:06 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.614 03:54:06 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.614 03:54:06 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.872 03:54:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.872 03:54:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:12.872 03:54:07 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.872 03:54:07 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.872 --rc genhtml_branch_coverage=1 00:06:12.872 --rc genhtml_function_coverage=1 00:06:12.872 --rc genhtml_legend=1 00:06:12.872 --rc geninfo_all_blocks=1 00:06:12.872 --rc geninfo_unexecuted_blocks=1 00:06:12.872 00:06:12.872 ' 00:06:12.872 03:54:07 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.872 --rc genhtml_branch_coverage=1 00:06:12.872 --rc genhtml_function_coverage=1 00:06:12.872 --rc genhtml_legend=1 00:06:12.872 --rc geninfo_all_blocks=1 00:06:12.872 --rc geninfo_unexecuted_blocks=1 
00:06:12.872 00:06:12.872 ' 00:06:12.872 03:54:07 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.872 --rc genhtml_branch_coverage=1 00:06:12.872 --rc genhtml_function_coverage=1 00:06:12.872 --rc genhtml_legend=1 00:06:12.872 --rc geninfo_all_blocks=1 00:06:12.872 --rc geninfo_unexecuted_blocks=1 00:06:12.872 00:06:12.872 ' 00:06:12.872 03:54:07 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.873 --rc genhtml_branch_coverage=1 00:06:12.873 --rc genhtml_function_coverage=1 00:06:12.873 --rc genhtml_legend=1 00:06:12.873 --rc geninfo_all_blocks=1 00:06:12.873 --rc geninfo_unexecuted_blocks=1 00:06:12.873 00:06:12.873 ' 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:12.873 03:54:07 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.873 03:54:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=597440 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 597440 00:06:12.873 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:12.873 03:54:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 597440 ']' 00:06:12.873 03:54:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.873 03:54:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.873 03:54:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.873 03:54:07 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.873 03:54:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.873 [2024-12-10 03:54:07.108126] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
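A note on the pattern being exercised in the lines that follow: spdk_tgt only listens on a local UNIX domain socket, so the spdkcli_tcp test bridges that socket to TCP with socat before driving it with rpc.py. A minimal sketch of the bridge, reusing the port and socket path from this run (the fork,reuseaddr options are an added assumption for surviving repeated connections; the test itself runs socat bare):

  # Start the target; it serves RPCs on /var/tmp/spdk.sock by default.
  ./build/bin/spdk_tgt -m 0x3 -p 0 &
  # Expose the UNIX socket on 127.0.0.1:9998 for TCP clients.
  socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
  # Talk to the target over TCP; -r retries while the bridge comes up, -t is the timeout.
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods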
00:06:12.873 [2024-12-10 03:54:07.108176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597440 ] 00:06:12.873 [2024-12-10 03:54:07.164457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.873 [2024-12-10 03:54:07.204666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.873 [2024-12-10 03:54:07.204670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.131 03:54:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.131 03:54:07 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:13.131 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=597523 00:06:13.131 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:13.131 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:13.390 [ 00:06:13.390 "bdev_malloc_delete", 00:06:13.390 "bdev_malloc_create", 00:06:13.390 "bdev_null_resize", 00:06:13.390 "bdev_null_delete", 00:06:13.390 "bdev_null_create", 00:06:13.390 "bdev_nvme_cuse_unregister", 00:06:13.390 "bdev_nvme_cuse_register", 00:06:13.390 "bdev_opal_new_user", 00:06:13.390 "bdev_opal_set_lock_state", 00:06:13.390 "bdev_opal_delete", 00:06:13.390 "bdev_opal_get_info", 00:06:13.390 "bdev_opal_create", 00:06:13.390 "bdev_nvme_opal_revert", 00:06:13.390 "bdev_nvme_opal_init", 00:06:13.390 "bdev_nvme_send_cmd", 00:06:13.390 "bdev_nvme_set_keys", 00:06:13.390 "bdev_nvme_get_path_iostat", 00:06:13.391 "bdev_nvme_get_mdns_discovery_info", 00:06:13.391 "bdev_nvme_stop_mdns_discovery", 00:06:13.391 "bdev_nvme_start_mdns_discovery", 00:06:13.391 "bdev_nvme_set_multipath_policy", 00:06:13.391 "bdev_nvme_set_preferred_path", 00:06:13.391 "bdev_nvme_get_io_paths", 00:06:13.391 "bdev_nvme_remove_error_injection", 00:06:13.391 "bdev_nvme_add_error_injection", 00:06:13.391 "bdev_nvme_get_discovery_info", 00:06:13.391 "bdev_nvme_stop_discovery", 00:06:13.391 "bdev_nvme_start_discovery", 00:06:13.391 "bdev_nvme_get_controller_health_info", 00:06:13.391 "bdev_nvme_disable_controller", 00:06:13.391 "bdev_nvme_enable_controller", 00:06:13.391 "bdev_nvme_reset_controller", 00:06:13.391 "bdev_nvme_get_transport_statistics", 00:06:13.391 "bdev_nvme_apply_firmware", 00:06:13.391 "bdev_nvme_detach_controller", 00:06:13.391 "bdev_nvme_get_controllers", 00:06:13.391 "bdev_nvme_attach_controller", 00:06:13.391 "bdev_nvme_set_hotplug", 00:06:13.391 "bdev_nvme_set_options", 00:06:13.391 "bdev_passthru_delete", 00:06:13.391 "bdev_passthru_create", 00:06:13.391 "bdev_lvol_set_parent_bdev", 00:06:13.391 "bdev_lvol_set_parent", 00:06:13.391 "bdev_lvol_check_shallow_copy", 00:06:13.391 "bdev_lvol_start_shallow_copy", 00:06:13.391 "bdev_lvol_grow_lvstore", 00:06:13.391 "bdev_lvol_get_lvols", 00:06:13.391 "bdev_lvol_get_lvstores", 00:06:13.391 "bdev_lvol_delete", 00:06:13.391 "bdev_lvol_set_read_only", 00:06:13.391 "bdev_lvol_resize", 00:06:13.391 "bdev_lvol_decouple_parent", 00:06:13.391 "bdev_lvol_inflate", 00:06:13.391 "bdev_lvol_rename", 00:06:13.391 "bdev_lvol_clone_bdev", 00:06:13.391 "bdev_lvol_clone", 00:06:13.391 "bdev_lvol_snapshot", 00:06:13.391 "bdev_lvol_create", 00:06:13.391 "bdev_lvol_delete_lvstore", 00:06:13.391 "bdev_lvol_rename_lvstore", 00:06:13.391 
"bdev_lvol_create_lvstore", 00:06:13.391 "bdev_raid_set_options", 00:06:13.391 "bdev_raid_remove_base_bdev", 00:06:13.391 "bdev_raid_add_base_bdev", 00:06:13.391 "bdev_raid_delete", 00:06:13.391 "bdev_raid_create", 00:06:13.391 "bdev_raid_get_bdevs", 00:06:13.391 "bdev_error_inject_error", 00:06:13.391 "bdev_error_delete", 00:06:13.391 "bdev_error_create", 00:06:13.391 "bdev_split_delete", 00:06:13.391 "bdev_split_create", 00:06:13.391 "bdev_delay_delete", 00:06:13.391 "bdev_delay_create", 00:06:13.391 "bdev_delay_update_latency", 00:06:13.391 "bdev_zone_block_delete", 00:06:13.391 "bdev_zone_block_create", 00:06:13.391 "blobfs_create", 00:06:13.391 "blobfs_detect", 00:06:13.391 "blobfs_set_cache_size", 00:06:13.391 "bdev_aio_delete", 00:06:13.391 "bdev_aio_rescan", 00:06:13.391 "bdev_aio_create", 00:06:13.391 "bdev_ftl_set_property", 00:06:13.391 "bdev_ftl_get_properties", 00:06:13.391 "bdev_ftl_get_stats", 00:06:13.391 "bdev_ftl_unmap", 00:06:13.391 "bdev_ftl_unload", 00:06:13.391 "bdev_ftl_delete", 00:06:13.391 "bdev_ftl_load", 00:06:13.391 "bdev_ftl_create", 00:06:13.391 "bdev_virtio_attach_controller", 00:06:13.391 "bdev_virtio_scsi_get_devices", 00:06:13.391 "bdev_virtio_detach_controller", 00:06:13.391 "bdev_virtio_blk_set_hotplug", 00:06:13.391 "bdev_iscsi_delete", 00:06:13.391 "bdev_iscsi_create", 00:06:13.391 "bdev_iscsi_set_options", 00:06:13.391 "accel_error_inject_error", 00:06:13.391 "ioat_scan_accel_module", 00:06:13.391 "dsa_scan_accel_module", 00:06:13.391 "iaa_scan_accel_module", 00:06:13.391 "keyring_file_remove_key", 00:06:13.391 "keyring_file_add_key", 00:06:13.391 "keyring_linux_set_options", 00:06:13.391 "fsdev_aio_delete", 00:06:13.391 "fsdev_aio_create", 00:06:13.391 "iscsi_get_histogram", 00:06:13.391 "iscsi_enable_histogram", 00:06:13.391 "iscsi_set_options", 00:06:13.391 "iscsi_get_auth_groups", 00:06:13.391 "iscsi_auth_group_remove_secret", 00:06:13.391 "iscsi_auth_group_add_secret", 00:06:13.391 "iscsi_delete_auth_group", 00:06:13.391 "iscsi_create_auth_group", 00:06:13.391 "iscsi_set_discovery_auth", 00:06:13.391 "iscsi_get_options", 00:06:13.391 "iscsi_target_node_request_logout", 00:06:13.391 "iscsi_target_node_set_redirect", 00:06:13.391 "iscsi_target_node_set_auth", 00:06:13.391 "iscsi_target_node_add_lun", 00:06:13.391 "iscsi_get_stats", 00:06:13.391 "iscsi_get_connections", 00:06:13.391 "iscsi_portal_group_set_auth", 00:06:13.391 "iscsi_start_portal_group", 00:06:13.391 "iscsi_delete_portal_group", 00:06:13.391 "iscsi_create_portal_group", 00:06:13.391 "iscsi_get_portal_groups", 00:06:13.391 "iscsi_delete_target_node", 00:06:13.391 "iscsi_target_node_remove_pg_ig_maps", 00:06:13.391 "iscsi_target_node_add_pg_ig_maps", 00:06:13.391 "iscsi_create_target_node", 00:06:13.391 "iscsi_get_target_nodes", 00:06:13.391 "iscsi_delete_initiator_group", 00:06:13.391 "iscsi_initiator_group_remove_initiators", 00:06:13.391 "iscsi_initiator_group_add_initiators", 00:06:13.391 "iscsi_create_initiator_group", 00:06:13.391 "iscsi_get_initiator_groups", 00:06:13.391 "nvmf_set_crdt", 00:06:13.391 "nvmf_set_config", 00:06:13.391 "nvmf_set_max_subsystems", 00:06:13.391 "nvmf_stop_mdns_prr", 00:06:13.391 "nvmf_publish_mdns_prr", 00:06:13.391 "nvmf_subsystem_get_listeners", 00:06:13.391 "nvmf_subsystem_get_qpairs", 00:06:13.391 "nvmf_subsystem_get_controllers", 00:06:13.391 "nvmf_get_stats", 00:06:13.391 "nvmf_get_transports", 00:06:13.391 "nvmf_create_transport", 00:06:13.391 "nvmf_get_targets", 00:06:13.391 "nvmf_delete_target", 00:06:13.391 "nvmf_create_target", 00:06:13.391 
"nvmf_subsystem_allow_any_host", 00:06:13.391 "nvmf_subsystem_set_keys", 00:06:13.391 "nvmf_subsystem_remove_host", 00:06:13.391 "nvmf_subsystem_add_host", 00:06:13.391 "nvmf_ns_remove_host", 00:06:13.391 "nvmf_ns_add_host", 00:06:13.391 "nvmf_subsystem_remove_ns", 00:06:13.391 "nvmf_subsystem_set_ns_ana_group", 00:06:13.391 "nvmf_subsystem_add_ns", 00:06:13.391 "nvmf_subsystem_listener_set_ana_state", 00:06:13.391 "nvmf_discovery_get_referrals", 00:06:13.391 "nvmf_discovery_remove_referral", 00:06:13.391 "nvmf_discovery_add_referral", 00:06:13.391 "nvmf_subsystem_remove_listener", 00:06:13.391 "nvmf_subsystem_add_listener", 00:06:13.391 "nvmf_delete_subsystem", 00:06:13.391 "nvmf_create_subsystem", 00:06:13.391 "nvmf_get_subsystems", 00:06:13.391 "env_dpdk_get_mem_stats", 00:06:13.391 "nbd_get_disks", 00:06:13.391 "nbd_stop_disk", 00:06:13.391 "nbd_start_disk", 00:06:13.391 "ublk_recover_disk", 00:06:13.391 "ublk_get_disks", 00:06:13.391 "ublk_stop_disk", 00:06:13.391 "ublk_start_disk", 00:06:13.391 "ublk_destroy_target", 00:06:13.391 "ublk_create_target", 00:06:13.391 "virtio_blk_create_transport", 00:06:13.391 "virtio_blk_get_transports", 00:06:13.391 "vhost_controller_set_coalescing", 00:06:13.391 "vhost_get_controllers", 00:06:13.391 "vhost_delete_controller", 00:06:13.391 "vhost_create_blk_controller", 00:06:13.391 "vhost_scsi_controller_remove_target", 00:06:13.391 "vhost_scsi_controller_add_target", 00:06:13.391 "vhost_start_scsi_controller", 00:06:13.391 "vhost_create_scsi_controller", 00:06:13.391 "thread_set_cpumask", 00:06:13.391 "scheduler_set_options", 00:06:13.391 "framework_get_governor", 00:06:13.391 "framework_get_scheduler", 00:06:13.391 "framework_set_scheduler", 00:06:13.391 "framework_get_reactors", 00:06:13.391 "thread_get_io_channels", 00:06:13.391 "thread_get_pollers", 00:06:13.391 "thread_get_stats", 00:06:13.391 "framework_monitor_context_switch", 00:06:13.391 "spdk_kill_instance", 00:06:13.391 "log_enable_timestamps", 00:06:13.391 "log_get_flags", 00:06:13.391 "log_clear_flag", 00:06:13.391 "log_set_flag", 00:06:13.391 "log_get_level", 00:06:13.391 "log_set_level", 00:06:13.391 "log_get_print_level", 00:06:13.391 "log_set_print_level", 00:06:13.391 "framework_enable_cpumask_locks", 00:06:13.391 "framework_disable_cpumask_locks", 00:06:13.391 "framework_wait_init", 00:06:13.391 "framework_start_init", 00:06:13.391 "scsi_get_devices", 00:06:13.391 "bdev_get_histogram", 00:06:13.391 "bdev_enable_histogram", 00:06:13.391 "bdev_set_qos_limit", 00:06:13.391 "bdev_set_qd_sampling_period", 00:06:13.391 "bdev_get_bdevs", 00:06:13.391 "bdev_reset_iostat", 00:06:13.391 "bdev_get_iostat", 00:06:13.391 "bdev_examine", 00:06:13.391 "bdev_wait_for_examine", 00:06:13.391 "bdev_set_options", 00:06:13.391 "accel_get_stats", 00:06:13.391 "accel_set_options", 00:06:13.391 "accel_set_driver", 00:06:13.391 "accel_crypto_key_destroy", 00:06:13.391 "accel_crypto_keys_get", 00:06:13.391 "accel_crypto_key_create", 00:06:13.391 "accel_assign_opc", 00:06:13.391 "accel_get_module_info", 00:06:13.391 "accel_get_opc_assignments", 00:06:13.391 "vmd_rescan", 00:06:13.391 "vmd_remove_device", 00:06:13.391 "vmd_enable", 00:06:13.391 "sock_get_default_impl", 00:06:13.391 "sock_set_default_impl", 00:06:13.391 "sock_impl_set_options", 00:06:13.391 "sock_impl_get_options", 00:06:13.391 "iobuf_get_stats", 00:06:13.391 "iobuf_set_options", 00:06:13.391 "keyring_get_keys", 00:06:13.391 "framework_get_pci_devices", 00:06:13.391 "framework_get_config", 00:06:13.391 "framework_get_subsystems", 00:06:13.391 
"fsdev_set_opts", 00:06:13.391 "fsdev_get_opts", 00:06:13.391 "trace_get_info", 00:06:13.391 "trace_get_tpoint_group_mask", 00:06:13.391 "trace_disable_tpoint_group", 00:06:13.391 "trace_enable_tpoint_group", 00:06:13.391 "trace_clear_tpoint_mask", 00:06:13.391 "trace_set_tpoint_mask", 00:06:13.391 "notify_get_notifications", 00:06:13.391 "notify_get_types", 00:06:13.391 "spdk_get_version", 00:06:13.391 "rpc_get_methods" 00:06:13.391 ] 00:06:13.391 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:13.391 03:54:07 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.392 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:13.392 03:54:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 597440 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 597440 ']' 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 597440 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597440 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597440' 00:06:13.392 killing process with pid 597440 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 597440 00:06:13.392 03:54:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 597440 00:06:13.651 00:06:13.651 real 0m1.088s 00:06:13.651 user 0m1.821s 00:06:13.651 sys 0m0.422s 00:06:13.651 03:54:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.651 03:54:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.651 ************************************ 00:06:13.651 END TEST spdkcli_tcp 00:06:13.651 ************************************ 00:06:13.651 03:54:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.651 03:54:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.651 03:54:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.651 03:54:07 -- common/autotest_common.sh@10 -- # set +x 00:06:13.651 ************************************ 00:06:13.651 START TEST dpdk_mem_utility 00:06:13.651 ************************************ 00:06:13.651 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.909 * Looking for test storage... 
00:06:13.909 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:13.909 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.909 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.909 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.909 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.909 03:54:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.909 03:54:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.909 03:54:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.909 03:54:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.909 03:54:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.909 03:54:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.909 03:54:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.909 03:54:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.909 03:54:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.910 03:54:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.910 --rc genhtml_branch_coverage=1 00:06:13.910 --rc genhtml_function_coverage=1 00:06:13.910 --rc genhtml_legend=1 00:06:13.910 --rc geninfo_all_blocks=1 00:06:13.910 --rc geninfo_unexecuted_blocks=1 00:06:13.910 00:06:13.910 ' 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.910 --rc 
genhtml_branch_coverage=1 00:06:13.910 --rc genhtml_function_coverage=1 00:06:13.910 --rc genhtml_legend=1 00:06:13.910 --rc geninfo_all_blocks=1 00:06:13.910 --rc geninfo_unexecuted_blocks=1 00:06:13.910 00:06:13.910 ' 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.910 --rc genhtml_branch_coverage=1 00:06:13.910 --rc genhtml_function_coverage=1 00:06:13.910 --rc genhtml_legend=1 00:06:13.910 --rc geninfo_all_blocks=1 00:06:13.910 --rc geninfo_unexecuted_blocks=1 00:06:13.910 00:06:13.910 ' 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.910 --rc genhtml_branch_coverage=1 00:06:13.910 --rc genhtml_function_coverage=1 00:06:13.910 --rc genhtml_legend=1 00:06:13.910 --rc geninfo_all_blocks=1 00:06:13.910 --rc geninfo_unexecuted_blocks=1 00:06:13.910 00:06:13.910 ' 00:06:13.910 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:13.910 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=597616 00:06:13.910 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 597616 00:06:13.910 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 597616 ']' 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.910 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.910 [2024-12-10 03:54:08.252237] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
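The dpdk_mem_utility flow traced below boils down to two commands against a running target: an RPC that dumps the DPDK memory map to a file, and a parser script that summarizes it. A minimal sketch with the paths and flags seen in this run (per this log, -m 0 selects the per-element detail for heap id 0):

  # Dump the target's DPDK memory map; this run writes /tmp/spdk_mem_dump.txt.
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools, and memzones from the dump.
  ./scripts/dpdk_mem_info.py
  # Print element-level detail for heap id 0.
  ./scripts/dpdk_mem_info.py -m 0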
00:06:13.910 [2024-12-10 03:54:08.252288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597616 ] 00:06:14.168 [2024-12-10 03:54:08.309324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.168 [2024-12-10 03:54:08.348541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.427 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.427 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:14.427 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:14.427 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:14.427 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.427 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.427 { 00:06:14.427 "filename": "/tmp/spdk_mem_dump.txt" 00:06:14.427 } 00:06:14.427 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.427 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.427 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:14.427 1 heaps totaling size 818.000000 MiB 00:06:14.427 size: 818.000000 MiB heap id: 0 00:06:14.427 end heaps---------- 00:06:14.427 9 mempools totaling size 603.782043 MiB 00:06:14.427 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:14.427 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:14.427 size: 100.555481 MiB name: bdev_io_597616 00:06:14.427 size: 50.003479 MiB name: msgpool_597616 00:06:14.427 size: 36.509338 MiB name: fsdev_io_597616 00:06:14.427 size: 21.763794 MiB name: PDU_Pool 00:06:14.427 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:14.427 size: 4.133484 MiB name: evtpool_597616 00:06:14.427 size: 0.026123 MiB name: Session_Pool 00:06:14.427 end mempools------- 00:06:14.427 6 memzones totaling size 4.142822 MiB 00:06:14.427 size: 1.000366 MiB name: RG_ring_0_597616 00:06:14.427 size: 1.000366 MiB name: RG_ring_1_597616 00:06:14.427 size: 1.000366 MiB name: RG_ring_4_597616 00:06:14.427 size: 1.000366 MiB name: RG_ring_5_597616 00:06:14.427 size: 0.125366 MiB name: RG_ring_2_597616 00:06:14.427 size: 0.015991 MiB name: RG_ring_3_597616 00:06:14.427 end memzones------- 00:06:14.427 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:14.427 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:14.427 list of free elements. 
size: 10.852478 MiB 00:06:14.427 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:14.427 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:14.427 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:14.427 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:14.427 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:14.427 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:14.427 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:14.427 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:14.428 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:14.428 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:14.428 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:14.428 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:14.428 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:14.428 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:14.428 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:14.428 list of standard malloc elements. size: 199.218628 MiB 00:06:14.428 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:14.428 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:14.428 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:14.428 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:14.428 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:14.428 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:14.428 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:14.428 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:14.428 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:14.428 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:14.428 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:14.428 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:14.428 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:14.428 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:14.428 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:14.428 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:14.428 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:14.428 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:14.428 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:14.428 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:14.428 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:14.428 list of memzone associated elements. size: 607.928894 MiB 00:06:14.428 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:14.428 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:14.428 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:14.428 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:14.428 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:14.428 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_597616_0 00:06:14.428 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:14.428 associated memzone info: size: 48.002930 MiB name: MP_msgpool_597616_0 00:06:14.428 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:14.428 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_597616_0 00:06:14.428 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:14.428 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:14.428 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:14.428 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:14.428 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:14.428 associated memzone info: size: 3.000122 MiB name: MP_evtpool_597616_0 00:06:14.428 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:14.428 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_597616 00:06:14.428 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:14.428 associated memzone info: size: 1.007996 MiB name: MP_evtpool_597616 00:06:14.428 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:14.428 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:14.428 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:14.428 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:14.428 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:14.428 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:14.428 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:14.428 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:14.428 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:14.428 associated memzone info: size: 1.000366 MiB name: RG_ring_0_597616 00:06:14.428 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:14.428 associated memzone info: size: 1.000366 MiB name: RG_ring_1_597616 00:06:14.428 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:14.428 associated memzone info: size: 1.000366 MiB name: RG_ring_4_597616 00:06:14.428 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:06:14.428 associated memzone info: size: 1.000366 MiB name: RG_ring_5_597616 00:06:14.428 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:14.428 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_597616 00:06:14.428 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:14.428 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_597616 00:06:14.428 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:14.428 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:14.428 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:14.428 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:14.428 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:14.428 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:14.428 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:14.428 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_597616 00:06:14.428 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:14.428 associated memzone info: size: 0.125366 MiB name: RG_ring_2_597616 00:06:14.428 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:14.428 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:14.428 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:14.428 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:14.428 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:14.428 associated memzone info: size: 0.015991 MiB name: RG_ring_3_597616 00:06:14.428 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:14.428 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:14.428 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:14.428 associated memzone info: size: 0.000183 MiB name: MP_msgpool_597616 00:06:14.428 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:14.428 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_597616 00:06:14.428 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:14.428 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_597616 00:06:14.428 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:14.428 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:14.428 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:14.428 03:54:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 597616 00:06:14.428 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 597616 ']' 00:06:14.428 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 597616 00:06:14.428 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:14.428 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.428 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597616 00:06:14.428 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.428 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.428 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597616' 00:06:14.428 killing process with pid 597616 00:06:14.428 03:54:08 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 597616 00:06:14.428 03:54:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 597616 00:06:14.687 00:06:14.687 real 0m0.975s 00:06:14.687 user 0m0.910s 00:06:14.687 sys 0m0.395s 00:06:14.687 03:54:09 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.687 03:54:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.687 ************************************ 00:06:14.687 END TEST dpdk_mem_utility 00:06:14.687 ************************************ 00:06:14.687 03:54:09 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:14.687 03:54:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.687 03:54:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.687 03:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:14.946 ************************************ 00:06:14.946 START TEST event 00:06:14.946 ************************************ 00:06:14.946 03:54:09 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:14.946 * Looking for test storage... 00:06:14.946 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:14.946 03:54:09 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.946 03:54:09 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.946 03:54:09 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.946 03:54:09 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.946 03:54:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.946 03:54:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.946 03:54:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.946 03:54:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.946 03:54:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.946 03:54:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.946 03:54:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.946 03:54:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.946 03:54:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.946 03:54:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.946 03:54:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.946 03:54:09 event -- scripts/common.sh@344 -- # case "$op" in 00:06:14.946 03:54:09 event -- scripts/common.sh@345 -- # : 1 00:06:14.946 03:54:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.946 03:54:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.946 03:54:09 event -- scripts/common.sh@365 -- # decimal 1 00:06:14.946 03:54:09 event -- scripts/common.sh@353 -- # local d=1 00:06:14.946 03:54:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.946 03:54:09 event -- scripts/common.sh@355 -- # echo 1 00:06:14.946 03:54:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.946 03:54:09 event -- scripts/common.sh@366 -- # decimal 2 00:06:14.946 03:54:09 event -- scripts/common.sh@353 -- # local d=2 00:06:14.946 03:54:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.946 03:54:09 event -- scripts/common.sh@355 -- # echo 2 00:06:14.946 03:54:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.946 03:54:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.946 03:54:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.946 03:54:09 event -- scripts/common.sh@368 -- # return 0 00:06:14.946 03:54:09 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.946 03:54:09 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.946 --rc genhtml_branch_coverage=1 00:06:14.946 --rc genhtml_function_coverage=1 00:06:14.946 --rc genhtml_legend=1 00:06:14.946 --rc geninfo_all_blocks=1 00:06:14.946 --rc geninfo_unexecuted_blocks=1 00:06:14.946 00:06:14.946 ' 00:06:14.946 03:54:09 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.946 --rc genhtml_branch_coverage=1 00:06:14.946 --rc genhtml_function_coverage=1 00:06:14.946 --rc genhtml_legend=1 00:06:14.946 --rc geninfo_all_blocks=1 00:06:14.946 --rc geninfo_unexecuted_blocks=1 00:06:14.946 00:06:14.946 ' 00:06:14.946 03:54:09 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.946 --rc genhtml_branch_coverage=1 00:06:14.947 --rc genhtml_function_coverage=1 00:06:14.947 --rc genhtml_legend=1 00:06:14.947 --rc geninfo_all_blocks=1 00:06:14.947 --rc geninfo_unexecuted_blocks=1 00:06:14.947 00:06:14.947 ' 00:06:14.947 03:54:09 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.947 --rc genhtml_branch_coverage=1 00:06:14.947 --rc genhtml_function_coverage=1 00:06:14.947 --rc genhtml_legend=1 00:06:14.947 --rc geninfo_all_blocks=1 00:06:14.947 --rc geninfo_unexecuted_blocks=1 00:06:14.947 00:06:14.947 ' 00:06:14.947 03:54:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:14.947 03:54:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:14.947 03:54:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:14.947 03:54:09 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:14.947 03:54:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.947 03:54:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.947 ************************************ 00:06:14.947 START TEST event_perf 00:06:14.947 ************************************ 00:06:14.947 03:54:09 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
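The 'lt 1.15 2' trace that keeps reappearing above is the harness checking the installed lcov version: cmp_versions in scripts/common.sh splits both version strings on '.', '-' and ':' and compares them component-wise as integers. A condensed, self-contained sketch of the same idea (unlike the real helper, this version assumes purely numeric components and skips the regex validation):

  lt() {  # usage: lt 1.15 2 -> returns 0 when $1 is an older version than $2
    local -a v1 v2
    local i
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing component decides
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo 'installed lcov predates 2.x'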
00:06:14.947 Running I/O for 1 seconds...[2024-12-10 03:54:09.284129] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:14.947 [2024-12-10 03:54:09.284200] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597934 ] 00:06:15.206 [2024-12-10 03:54:09.344570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.206 [2024-12-10 03:54:09.384526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.206 [2024-12-10 03:54:09.384613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.206 [2024-12-10 03:54:09.384702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.206 [2024-12-10 03:54:09.384704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.141 Running I/O for 1 seconds... 00:06:16.141 lcore 0: 217794 00:06:16.141 lcore 1: 217796 00:06:16.141 lcore 2: 217794 00:06:16.141 lcore 3: 217794 00:06:16.141 done. 00:06:16.141 00:06:16.141 real 0m1.161s 00:06:16.141 user 0m4.091s 00:06:16.141 sys 0m0.067s 00:06:16.141 03:54:10 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.142 03:54:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.142 ************************************ 00:06:16.142 END TEST event_perf 00:06:16.142 ************************************ 00:06:16.142 03:54:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.142 03:54:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:16.142 03:54:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.142 03:54:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.142 ************************************ 00:06:16.142 START TEST event_reactor 00:06:16.142 ************************************ 00:06:16.142 03:54:10 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.142 [2024-12-10 03:54:10.517101] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
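event_perf prints one counter per reactor, and with -t 1 each 'lcore N: count' line above is already events per second for that core; the four counters in this run total 871,178 events per second. A quick sketch for totalling a captured run (event_perf.log is a hypothetical capture of the tool's output, not a file this test writes):

  # Sum the per-lcore counters; the count is the last field on each line.
  awk '/lcore [0-9]+:/ { total += $NF } END { printf "%d events/sec across all cores\n", total }' event_perf.log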
00:06:16.142 [2024-12-10 03:54:10.517174] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598216 ] 00:06:16.400 [2024-12-10 03:54:10.580314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.400 [2024-12-10 03:54:10.618676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.337 test_start 00:06:17.337 oneshot 00:06:17.337 tick 100 00:06:17.337 tick 100 00:06:17.337 tick 250 00:06:17.337 tick 100 00:06:17.337 tick 100 00:06:17.337 tick 250 00:06:17.337 tick 100 00:06:17.337 tick 500 00:06:17.337 tick 100 00:06:17.337 tick 100 00:06:17.337 tick 250 00:06:17.337 tick 100 00:06:17.337 tick 100 00:06:17.337 test_end 00:06:17.337 00:06:17.337 real 0m1.159s 00:06:17.337 user 0m1.093s 00:06:17.337 sys 0m0.062s 00:06:17.337 03:54:11 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.337 03:54:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:17.337 ************************************ 00:06:17.337 END TEST event_reactor 00:06:17.337 ************************************ 00:06:17.337 03:54:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.337 03:54:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:17.337 03:54:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.337 03:54:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.596 ************************************ 00:06:17.596 START TEST event_reactor_perf 00:06:17.596 ************************************ 00:06:17.596 03:54:11 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.596 [2024-12-10 03:54:11.747609] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:17.596 [2024-12-10 03:54:11.747679] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598496 ] 00:06:17.596 [2024-12-10 03:54:11.812495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.596 [2024-12-10 03:54:11.849195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.531 test_start 00:06:18.531 test_end 00:06:18.531 Performance: 559193 events per second 00:06:18.531 00:06:18.531 real 0m1.158s 00:06:18.531 user 0m1.095s 00:06:18.531 sys 0m0.059s 00:06:18.531 03:54:12 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.531 03:54:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.531 ************************************ 00:06:18.531 END TEST event_reactor_perf 00:06:18.531 ************************************ 00:06:18.790 03:54:12 event -- event/event.sh@49 -- # uname -s 00:06:18.791 03:54:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:18.791 03:54:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:18.791 03:54:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.791 03:54:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.791 03:54:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.791 ************************************ 00:06:18.791 START TEST event_scheduler 00:06:18.791 ************************************ 00:06:18.791 03:54:12 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:18.791 * Looking for test storage... 
00:06:18.791 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.791 03:54:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.791 --rc genhtml_branch_coverage=1 00:06:18.791 --rc genhtml_function_coverage=1 00:06:18.791 --rc genhtml_legend=1 00:06:18.791 --rc geninfo_all_blocks=1 00:06:18.791 --rc geninfo_unexecuted_blocks=1 00:06:18.791 00:06:18.791 ' 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.791 --rc genhtml_branch_coverage=1 00:06:18.791 --rc genhtml_function_coverage=1 00:06:18.791 --rc genhtml_legend=1 00:06:18.791 --rc geninfo_all_blocks=1 00:06:18.791 --rc geninfo_unexecuted_blocks=1 00:06:18.791 00:06:18.791 ' 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.791 --rc genhtml_branch_coverage=1 00:06:18.791 --rc genhtml_function_coverage=1 00:06:18.791 --rc genhtml_legend=1 00:06:18.791 --rc geninfo_all_blocks=1 00:06:18.791 --rc geninfo_unexecuted_blocks=1 00:06:18.791 00:06:18.791 ' 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.791 --rc genhtml_branch_coverage=1 00:06:18.791 --rc genhtml_function_coverage=1 00:06:18.791 --rc genhtml_legend=1 00:06:18.791 --rc geninfo_all_blocks=1 00:06:18.791 --rc geninfo_unexecuted_blocks=1 00:06:18.791 00:06:18.791 ' 00:06:18.791 03:54:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:18.791 03:54:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=598808 00:06:18.791 03:54:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.791 03:54:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:18.791 03:54:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 598808 
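The launch just traced uses --wait-for-rpc, which parks the app after the reactors spin up but before framework initialization, so the scheduler can still be swapped. The next lines do exactly that over RPC: select the dynamic scheduler, then release initialization. A minimal sketch of the handshake (binary path shortened; flags as used by this run):

  # Start paused: 4 cores (0xF), main lcore 2, wait for RPC before init.
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # While paused, pick the dynamic scheduler, then let init finish.
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init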
00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 598808 ']' 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.791 03:54:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.791 [2024-12-10 03:54:13.171596] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:18.791 [2024-12-10 03:54:13.171642] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598808 ] 00:06:19.050 [2024-12-10 03:54:13.226731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.050 [2024-12-10 03:54:13.269098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.050 [2024-12-10 03:54:13.269184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.050 [2024-12-10 03:54:13.269273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.051 [2024-12-10 03:54:13.269274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:19.051 03:54:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.051 [2024-12-10 03:54:13.313788] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:19.051 [2024-12-10 03:54:13.313806] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:19.051 [2024-12-10 03:54:13.313815] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:19.051 [2024-12-10 03:54:13.313820] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:19.051 [2024-12-10 03:54:13.313825] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.051 03:54:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.051 [2024-12-10 03:54:13.388309] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
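scheduler_create_thread now exercises a test-only RPC plugin: each call below registers an SPDK thread with a cpumask and a synthetic busy percentage for the dynamic scheduler to react to. A condensed sketch of the same calls (rpc_cmd in the test wraps rpc.py; the scheduler_plugin module exists only for this test app and must be importable, which the harness arranges):

  # A pinned thread on core 0 reporting 100% activity.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # An idle pinned thread on core 1 (0% activity).
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
  # Re-dial an existing thread (id 11 in this run) to 50% busy.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50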
00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.051 03:54:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.051 03:54:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.051 ************************************ 00:06:19.051 START TEST scheduler_create_thread 00:06:19.051 ************************************ 00:06:19.051 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:19.051 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:19.051 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.051 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 2 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 3 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 4 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 5 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 6 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 7 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 8 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 9 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 10 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.310 03:54:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.686 03:54:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.686 03:54:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:20.686 03:54:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:20.686 03:54:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.686 03:54:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.061 03:54:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.061 00:06:22.061 real 0m2.621s 00:06:22.061 user 0m0.023s 00:06:22.061 sys 0m0.005s 00:06:22.061 03:54:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.061 03:54:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.061 ************************************ 00:06:22.061 END TEST scheduler_create_thread 00:06:22.062 ************************************ 00:06:22.062 03:54:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:22.062 03:54:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 598808 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 598808 ']' 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 598808 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 598808 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 598808' 00:06:22.062 killing process with pid 598808 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 598808 00:06:22.062 03:54:16 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 598808 00:06:22.320 [2024-12-10 03:54:16.522273] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
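The create/tune/delete churn that test just exercised reduces to a handful of plugin RPCs. A sketch assuming the scheduler app from above is still listening and that test/event/scheduler/scheduler_plugin.py is importable (its directory must be on PYTHONPATH for --plugin to resolve it):

    # Paraphrase of the scheduler_create_thread flow; masks match the app's 0xF core mask.
    rpc() { "$SPDK/scripts/rpc.py" --plugin scheduler_plugin "$@"; }
    for mask in 0x1 0x2 0x4 0x8; do
        rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100   # fully busy, pinned
        rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0       # idle, pinned
    done
    rpc scheduler_thread_create -n one_third_active -a 30
    tid=$(rpc scheduler_thread_create -n half_active -a 0)   # the RPC echoes the new thread id
    rpc scheduler_thread_set_active "$tid" 50                # raise it to 50% busy
    tid=$(rpc scheduler_thread_create -n deleted -a 100)
    rpc scheduler_thread_delete "$tid"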
00:06:22.320 00:06:22.320 real 0m3.727s 00:06:22.320 user 0m5.571s 00:06:22.320 sys 0m0.343s 00:06:22.320 03:54:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.320 03:54:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.320 ************************************ 00:06:22.320 END TEST event_scheduler 00:06:22.320 ************************************ 00:06:22.579 03:54:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:22.579 03:54:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:22.580 03:54:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.580 03:54:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.580 03:54:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.580 ************************************ 00:06:22.580 START TEST app_repeat 00:06:22.580 ************************************ 00:06:22.580 03:54:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=599407 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 599407' 00:06:22.580 Process app_repeat pid: 599407 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:22.580 spdk_app_start Round 0 00:06:22.580 03:54:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 599407 /var/tmp/spdk-nbd.sock 00:06:22.580 03:54:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 599407 ']' 00:06:22.580 03:54:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.580 03:54:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.580 03:54:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.580 03:54:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.580 03:54:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.580 [2024-12-10 03:54:16.798087] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:22.580 [2024-12-10 03:54:16.798149] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid599407 ] 00:06:22.580 [2024-12-10 03:54:16.858894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.580 [2024-12-10 03:54:16.898700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.580 [2024-12-10 03:54:16.898702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.839 03:54:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.839 03:54:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:22.839 03:54:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.839 Malloc0 00:06:22.839 03:54:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.097 Malloc1 00:06:23.097 03:54:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.097 03:54:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.356 /dev/nbd0 00:06:23.356 03:54:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.356 03:54:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
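The grep against /proc/partitions that starts here is the waitfornbd helper polling for the device node, then reading one block back to prove the export actually serves I/O. Reconstructed from the trace (the 20-attempt limit and dd invocation match; the scratch path and sleep interval are assumptions):

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest   # the run keeps this file under the spdk test tree instead
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing; the delay is not visible in the xtrace
        done
        ((i <= 20)) || return 1
        # O_DIRECT read of a single 4 KiB block, bypassing the page cache
        dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # the trace only insists the read was non-empty
    }
    waitfornbd nbd0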
00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.356 1+0 records in 00:06:23.356 1+0 records out 00:06:23.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000120989 s, 33.9 MB/s 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.356 03:54:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.356 03:54:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.356 03:54:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.356 03:54:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.615 /dev/nbd1 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.615 1+0 records in 00:06:23.615 1+0 records out 00:06:23.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242223 s, 16.9 MB/s 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.615 03:54:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.615 { 00:06:23.615 "nbd_device": "/dev/nbd0", 00:06:23.615 "bdev_name": "Malloc0" 00:06:23.615 }, 00:06:23.615 { 00:06:23.615 "nbd_device": "/dev/nbd1", 00:06:23.615 "bdev_name": "Malloc1" 00:06:23.615 } 00:06:23.615 ]' 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.615 { 00:06:23.615 "nbd_device": "/dev/nbd0", 00:06:23.615 "bdev_name": "Malloc0" 00:06:23.615 }, 00:06:23.615 { 00:06:23.615 "nbd_device": "/dev/nbd1", 00:06:23.615 "bdev_name": "Malloc1" 00:06:23.615 } 00:06:23.615 ]' 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.615 03:54:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.615 /dev/nbd1' 00:06:23.616 03:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.616 /dev/nbd1' 00:06:23.616 03:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.875 256+0 records in 00:06:23.875 256+0 records out 00:06:23.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100639 s, 104 MB/s 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.875 256+0 records in 00:06:23.875 256+0 records out 00:06:23.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130457 s, 80.4 MB/s 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.875 256+0 records in 00:06:23.875 256+0 records out 00:06:23.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138837 s, 75.5 MB/s 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.875 03:54:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.134 03:54:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.393 03:54:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.393 03:54:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.653 03:54:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.653 [2024-12-10 03:54:19.009165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.912 [2024-12-10 03:54:19.043248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.912 [2024-12-10 03:54:19.043251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.912 [2024-12-10 03:54:19.083055] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.912 [2024-12-10 03:54:19.083092] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.201 03:54:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.201 03:54:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:28.201 spdk_app_start Round 1 00:06:28.201 03:54:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 599407 /var/tmp/spdk-nbd.sock 00:06:28.201 03:54:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 599407 ']' 00:06:28.201 03:54:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.201 03:54:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.201 03:54:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
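Each app_repeat round runs the same data-integrity pass over the two exported devices, as traced above: seed a 1 MiB random file, dd it onto /dev/nbd0 and /dev/nbd1 with O_DIRECT, then cmp each device back against the file. In sketch form (scratch path assumed):

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp=/tmp/nbdrandtest   # the run keeps this under the spdk test tree instead
    dd if=/dev/urandom of="$tmp" bs=4096 count=256   # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"   # verify phase: fails loudly on the first differing byte
    done
    rm "$tmp"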
00:06:28.201 03:54:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.201 03:54:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.201 03:54:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.201 03:54:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.201 03:54:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.201 Malloc0 00:06:28.201 03:54:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.201 Malloc1 00:06:28.201 03:54:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.201 03:54:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.201 /dev/nbd0 00:06:28.460 03:54:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.460 03:54:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.460 03:54:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:28.460 03:54:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.460 03:54:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.460 03:54:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.460 03:54:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:28.460 03:54:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.460 03:54:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.460 03:54:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.460 03:54:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:28.460 1+0 records in 00:06:28.460 1+0 records out 00:06:28.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020038 s, 20.4 MB/s 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.461 03:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.461 03:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.461 03:54:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.461 /dev/nbd1 00:06:28.461 03:54:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.461 03:54:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.461 1+0 records in 00:06:28.461 1+0 records out 00:06:28.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199242 s, 20.6 MB/s 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.461 03:54:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.461 03:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.461 03:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.461 03:54:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.720 03:54:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.720 03:54:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.720 { 00:06:28.720 
"nbd_device": "/dev/nbd0", 00:06:28.720 "bdev_name": "Malloc0" 00:06:28.720 }, 00:06:28.720 { 00:06:28.720 "nbd_device": "/dev/nbd1", 00:06:28.720 "bdev_name": "Malloc1" 00:06:28.720 } 00:06:28.720 ]' 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.720 { 00:06:28.720 "nbd_device": "/dev/nbd0", 00:06:28.720 "bdev_name": "Malloc0" 00:06:28.720 }, 00:06:28.720 { 00:06:28.720 "nbd_device": "/dev/nbd1", 00:06:28.720 "bdev_name": "Malloc1" 00:06:28.720 } 00:06:28.720 ]' 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.720 /dev/nbd1' 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.720 /dev/nbd1' 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.720 256+0 records in 00:06:28.720 256+0 records out 00:06:28.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105959 s, 99.0 MB/s 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.720 256+0 records in 00:06:28.720 256+0 records out 00:06:28.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128978 s, 81.3 MB/s 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.720 03:54:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.979 256+0 records in 00:06:28.979 256+0 records out 00:06:28.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143593 s, 73.0 MB/s 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.979 03:54:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.239 03:54:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.498 03:54:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.498 03:54:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.756 03:54:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.757 [2024-12-10 03:54:24.079811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.757 [2024-12-10 03:54:24.113609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.757 [2024-12-10 03:54:24.113611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.036 [2024-12-10 03:54:24.154347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.036 [2024-12-10 03:54:24.154384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.616 03:54:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.616 03:54:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:32.616 spdk_app_start Round 2 00:06:32.616 03:54:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 599407 /var/tmp/spdk-nbd.sock 00:06:32.616 03:54:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 599407 ']' 00:06:32.616 03:54:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.616 03:54:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.616 03:54:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
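The '[]' returned by nbd_get_disks at the end of each round is how the test proves both devices detached cleanly: nbd_get_count turns that JSON into a number and expects 0. A sketch, reusing the $SPDK variable from the earlier sketches:

    nbd_get_count() {
        local rpc_server=/var/tmp/spdk-nbd.sock
        "$SPDK/scripts/rpc.py" -s "$rpc_server" nbd_get_disks |
            jq -r '.[] | .nbd_device' |
            grep -c /dev/nbd || true   # grep -c exits 1 on zero matches; the count is still printed
    }
    [ "$(nbd_get_count)" -eq 0 ] && echo 'all nbd devices detached'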
00:06:32.616 03:54:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.616 03:54:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.875 03:54:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.875 03:54:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:32.875 03:54:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.134 Malloc0 00:06:33.134 03:54:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.134 Malloc1 00:06:33.134 03:54:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.134 03:54:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.393 /dev/nbd0 00:06:33.393 03:54:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.393 03:54:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:33.393 1+0 records in 00:06:33.393 1+0 records out 00:06:33.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228255 s, 17.9 MB/s 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.393 03:54:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:33.393 03:54:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.393 03:54:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.393 03:54:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.652 /dev/nbd1 00:06:33.652 03:54:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.652 03:54:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.652 1+0 records in 00:06:33.652 1+0 records out 00:06:33.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000137543 s, 29.8 MB/s 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.652 03:54:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:33.652 03:54:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.652 03:54:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.652 03:54:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.652 03:54:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.652 03:54:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.912 { 00:06:33.912 
"nbd_device": "/dev/nbd0", 00:06:33.912 "bdev_name": "Malloc0" 00:06:33.912 }, 00:06:33.912 { 00:06:33.912 "nbd_device": "/dev/nbd1", 00:06:33.912 "bdev_name": "Malloc1" 00:06:33.912 } 00:06:33.912 ]' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.912 { 00:06:33.912 "nbd_device": "/dev/nbd0", 00:06:33.912 "bdev_name": "Malloc0" 00:06:33.912 }, 00:06:33.912 { 00:06:33.912 "nbd_device": "/dev/nbd1", 00:06:33.912 "bdev_name": "Malloc1" 00:06:33.912 } 00:06:33.912 ]' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.912 /dev/nbd1' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.912 /dev/nbd1' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.912 256+0 records in 00:06:33.912 256+0 records out 00:06:33.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106178 s, 98.8 MB/s 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.912 256+0 records in 00:06:33.912 256+0 records out 00:06:33.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132432 s, 79.2 MB/s 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.912 256+0 records in 00:06:33.912 256+0 records out 00:06:33.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140841 s, 74.5 MB/s 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.912 03:54:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.171 03:54:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.430 03:54:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.430 03:54:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.689 03:54:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.948 [2024-12-10 03:54:29.131466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.948 [2024-12-10 03:54:29.164909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.948 [2024-12-10 03:54:29.164911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.948 [2024-12-10 03:54:29.204808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.948 [2024-12-10 03:54:29.204846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.236 03:54:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 599407 /var/tmp/spdk-nbd.sock 00:06:38.237 03:54:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 599407 ']' 00:06:38.237 03:54:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.237 03:54:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.237 03:54:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
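The nbd exercise above reduces to plain dd and cmp: waitfornbd polls /proc/partitions (up to 20 tries) and reads a single 4 KiB block back through direct I/O, then nbd_dd_data_verify writes 1 MiB of random data to every device and byte-compares it against the source file. A minimal sketch of that verify pattern, with illustrative device names and temp path (the real helpers live in bdev/nbd_common.sh):

    # Sketch of the write/verify round trip traced above; assumes the
    # nbd devices are already connected. Paths and names are illustrative.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$(mktemp)

    # Write pass: 1 MiB of random data, pushed to each device with direct I/O.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify pass: byte-compare the first 1 MiB of each device with the file.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev"
    done
    rm -f "$tmp_file"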
00:06:38.237 03:54:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.237 03:54:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:38.237 03:54:32 event.app_repeat -- event/event.sh@39 -- # killprocess 599407 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 599407 ']' 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 599407 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 599407 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 599407' 00:06:38.237 killing process with pid 599407 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@973 -- # kill 599407 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@978 -- # wait 599407 00:06:38.237 spdk_app_start is called in Round 0. 00:06:38.237 Shutdown signal received, stop current app iteration 00:06:38.237 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:06:38.237 spdk_app_start is called in Round 1. 00:06:38.237 Shutdown signal received, stop current app iteration 00:06:38.237 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:06:38.237 spdk_app_start is called in Round 2. 00:06:38.237 Shutdown signal received, stop current app iteration 00:06:38.237 Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 reinitialization... 00:06:38.237 spdk_app_start is called in Round 3. 00:06:38.237 Shutdown signal received, stop current app iteration 00:06:38.237 03:54:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:38.237 03:54:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:38.237 00:06:38.237 real 0m15.578s 00:06:38.237 user 0m33.814s 00:06:38.237 sys 0m2.393s 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.237 03:54:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.237 ************************************ 00:06:38.237 END TEST app_repeat 00:06:38.237 ************************************ 00:06:38.237 03:54:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:38.237 03:54:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:38.237 03:54:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.237 03:54:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.237 03:54:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.237 ************************************ 00:06:38.237 START TEST cpu_locks 00:06:38.237 ************************************ 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:38.237 * Looking for test storage... 
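The START TEST / END TEST banners and the real/user/sys summaries that bracket each case come from the run_test wrapper in autotest_common.sh. Schematically it behaves like this simplified sketch (an illustration of the shape, not the actual implementation):

    # Rough shape of a run_test-style wrapper: banner, timed run, banner.
    run_test_sketch() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # produces the real/user/sys lines seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    run_test_sketch demo sleep 1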
00:06:38.237 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.237 03:54:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.237 --rc genhtml_branch_coverage=1 00:06:38.237 --rc genhtml_function_coverage=1 00:06:38.237 --rc genhtml_legend=1 00:06:38.237 --rc geninfo_all_blocks=1 00:06:38.237 --rc geninfo_unexecuted_blocks=1 00:06:38.237 00:06:38.237 ' 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.237 --rc genhtml_branch_coverage=1 00:06:38.237 --rc 
genhtml_function_coverage=1 00:06:38.237 --rc genhtml_legend=1 00:06:38.237 --rc geninfo_all_blocks=1 00:06:38.237 --rc geninfo_unexecuted_blocks=1 00:06:38.237 00:06:38.237 ' 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.237 --rc genhtml_branch_coverage=1 00:06:38.237 --rc genhtml_function_coverage=1 00:06:38.237 --rc genhtml_legend=1 00:06:38.237 --rc geninfo_all_blocks=1 00:06:38.237 --rc geninfo_unexecuted_blocks=1 00:06:38.237 00:06:38.237 ' 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.237 --rc genhtml_branch_coverage=1 00:06:38.237 --rc genhtml_function_coverage=1 00:06:38.237 --rc genhtml_legend=1 00:06:38.237 --rc geninfo_all_blocks=1 00:06:38.237 --rc geninfo_unexecuted_blocks=1 00:06:38.237 00:06:38.237 ' 00:06:38.237 03:54:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:38.237 03:54:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:38.237 03:54:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:38.237 03:54:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.237 03:54:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.237 ************************************ 00:06:38.237 START TEST default_locks 00:06:38.237 ************************************ 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=602608 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 602608 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 602608 ']' 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.237 03:54:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.497 [2024-12-10 03:54:32.660423] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
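The lcov probe above ends in a purely numeric comparison: cmp_versions splits both version strings on '.', '-' and ':' and walks the components, which is how 1.15 sorts before 2. The same idea in a compact sketch (an illustrative reimplementation, not scripts/common.sh itself; assumes numeric components):

    # Succeeds when version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Missing components count as 0, so 1.15 is treated as 1.15.0.
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"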
00:06:38.497 [2024-12-10 03:54:32.660465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602608 ] 00:06:38.497 [2024-12-10 03:54:32.719590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.497 [2024-12-10 03:54:32.757380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.756 03:54:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.756 03:54:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:38.756 03:54:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 602608 00:06:38.756 03:54:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 602608 00:06:38.756 03:54:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.015 lslocks: write error 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 602608 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 602608 ']' 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 602608 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602608 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602608' 00:06:39.015 killing process with pid 602608 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 602608 00:06:39.015 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 602608 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 602608 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 602608 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 602608 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 602608 ']' 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 
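The stray "lslocks: write error" lines here are expected noise, not failures: locks_exist pipes lslocks into grep -q, grep exits as soon as it finds a match, and lslocks takes a write error (EPIPE) on the rest of its output. The check itself is essentially a one-liner, roughly:

    # Does the given pid hold one of SPDK's CPU-core lock files?
    # grep -q closing the pipe early is what makes lslocks print
    # "lslocks: write error"; the exit status is still what matters.
    locks_exist_sketch() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist_sketch "$tgt_pid" && echo "core lock held"    # tgt_pid illustrative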
00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.583 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (602608) - No such process 00:06:39.583 ERROR: process (pid: 602608) is no longer running 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.583 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.584 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.584 03:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:39.584 03:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.584 03:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.584 03:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.584 00:06:39.584 real 0m1.073s 00:06:39.584 user 0m1.027s 00:06:39.584 sys 0m0.503s 00:06:39.584 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.584 03:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.584 ************************************ 00:06:39.584 END TEST default_locks 00:06:39.584 ************************************ 00:06:39.584 03:54:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:39.584 03:54:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.584 03:54:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.584 03:54:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.584 ************************************ 00:06:39.584 START TEST default_locks_via_rpc 00:06:39.584 ************************************ 00:06:39.584 03:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:39.584 03:54:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=602828 00:06:39.584 03:54:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 602828 00:06:39.584 03:54:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.584 03:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 602828 ']' 00:06:39.584 03:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.584 03:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.584 03:54:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.584 03:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.584 03:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.584 [2024-12-10 03:54:33.801240] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:39.584 [2024-12-10 03:54:33.801289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602828 ] 00:06:39.584 [2024-12-10 03:54:33.857621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.584 [2024-12-10 03:54:33.896577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 602828 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 602828 00:06:39.843 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 602828 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 602828 ']' 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 602828 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
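default_locks_via_rpc drives the same lock file, but toggles it on a live target instead of relying on process exit: the framework_disable_cpumask_locks and framework_enable_cpumask_locks steps above are ordinary rpc.py calls, along these lines (socket path is the default one; rpc.py path abbreviated):

    # Release, observe, then re-acquire /var/tmp/spdk_cpu_lock_* at runtime.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks | grep spdk_cpu_lock || echo "locks released"
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks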
00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602828 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602828' 00:06:40.412 killing process with pid 602828 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 602828 00:06:40.412 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 602828 00:06:40.671 00:06:40.671 real 0m1.202s 00:06:40.671 user 0m1.160s 00:06:40.671 sys 0m0.540s 00:06:40.671 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.671 03:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.671 ************************************ 00:06:40.671 END TEST default_locks_via_rpc 00:06:40.671 ************************************ 00:06:40.671 03:54:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:40.671 03:54:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.671 03:54:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.671 03:54:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.671 ************************************ 00:06:40.671 START TEST non_locking_app_on_locked_coremask 00:06:40.671 ************************************ 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=603116 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 603116 /var/tmp/spdk.sock 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 603116 ']' 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.671 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.929 [2024-12-10 03:54:35.057287] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:40.929 [2024-12-10 03:54:35.057323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603116 ] 00:06:40.929 [2024-12-10 03:54:35.113549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.929 [2024-12-10 03:54:35.152885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=603134 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 603134 /var/tmp/spdk2.sock 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 603134 ']' 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.188 03:54:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 [2024-12-10 03:54:35.405956] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:41.188 [2024-12-10 03:54:35.406003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603134 ] 00:06:41.188 [2024-12-10 03:54:35.489821] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.188 [2024-12-10 03:54:35.489847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.188 [2024-12-10 03:54:35.571242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.124 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.124 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.124 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 603116 00:06:42.124 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 603116 00:06:42.124 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.383 lslocks: write error 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 603116 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 603116 ']' 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 603116 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603116 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603116' 00:06:42.383 killing process with pid 603116 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 603116 00:06:42.383 03:54:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 603116 00:06:42.951 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 603134 00:06:42.951 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 603134 ']' 00:06:42.951 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 603134 00:06:42.951 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.951 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.951 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603134 00:06:43.210 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.210 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.210 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603134' 00:06:43.210 killing 
process with pid 603134 00:06:43.210 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 603134 00:06:43.210 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 603134 00:06:43.469 00:06:43.469 real 0m2.612s 00:06:43.469 user 0m2.752s 00:06:43.469 sys 0m0.850s 00:06:43.469 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.469 03:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.469 ************************************ 00:06:43.469 END TEST non_locking_app_on_locked_coremask 00:06:43.469 ************************************ 00:06:43.469 03:54:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:43.469 03:54:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.469 03:54:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.469 03:54:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.469 ************************************ 00:06:43.469 START TEST locking_app_on_unlocked_coremask 00:06:43.469 ************************************ 00:06:43.469 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:43.469 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=603669 00:06:43.470 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 603669 /var/tmp/spdk.sock 00:06:43.470 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.470 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 603669 ']' 00:06:43.470 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.470 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.470 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.470 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.470 03:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.470 [2024-12-10 03:54:37.744512] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:06:43.470 [2024-12-10 03:54:37.744547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603669 ] 00:06:43.470 [2024-12-10 03:54:37.800714] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
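locking_app_on_unlocked_coremask is the mirror image of the test before it: the first target opts out of core locking with --disable-cpumask-locks, so a second target pinned to the same core, listening on its own RPC socket, can start up and take the lock itself. The launch pattern, schematically (binary path abbreviated, backgrounding used here in place of the harness's waitforlisten):

    # Two targets sharing core 0: the first holds no core lock, the
    # second (on a separate RPC socket) is then free to acquire it.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &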
00:06:43.470 [2024-12-10 03:54:37.800742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.470 [2024-12-10 03:54:37.839877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=603687 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 603687 /var/tmp/spdk2.sock 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 603687 ']' 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.729 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.729 [2024-12-10 03:54:38.071638] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:43.729 [2024-12-10 03:54:38.071682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603687 ] 00:06:43.988 [2024-12-10 03:54:38.155181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.988 [2024-12-10 03:54:38.229352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.556 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.556 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.556 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 603687 00:06:44.556 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 603687 00:06:44.556 03:54:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.124 lslocks: write error 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 603669 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 603669 ']' 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 603669 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603669 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603669' 00:06:45.124 killing process with pid 603669 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 603669 00:06:45.124 03:54:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 603669 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 603687 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 603687 ']' 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 603687 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603687 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.061 03:54:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603687' 00:06:46.061 killing process with pid 603687 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 603687 00:06:46.061 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 603687 00:06:46.321 00:06:46.321 real 0m2.757s 00:06:46.321 user 0m2.896s 00:06:46.321 sys 0m0.909s 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.321 ************************************ 00:06:46.321 END TEST locking_app_on_unlocked_coremask 00:06:46.321 ************************************ 00:06:46.321 03:54:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.321 03:54:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.321 03:54:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.321 03:54:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.321 ************************************ 00:06:46.321 START TEST locking_app_on_locked_coremask 00:06:46.321 ************************************ 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=604237 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 604237 /var/tmp/spdk.sock 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 604237 ']' 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.321 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.321 [2024-12-10 03:54:40.567684] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:46.321 [2024-12-10 03:54:40.567723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604237 ] 00:06:46.321 [2024-12-10 03:54:40.623302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.321 [2024-12-10 03:54:40.662238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=604249 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 604249 /var/tmp/spdk2.sock 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 604249 /var/tmp/spdk2.sock 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 604249 /var/tmp/spdk2.sock 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 604249 ']' 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.585 03:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.585 [2024-12-10 03:54:40.913567] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
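NOT waitforlisten 604249 is an expected-failure assertion: the second target has to abort, since pid 604237 still holds the core-0 lock (the claim_cpu_cores error just below confirms it), so the step passes only when the wrapped command fails. The wrapper essentially inverts the exit status, as in this sketch (the real NOT in autotest_common.sh tracks the status in es, as the trace shows):

    # Expected-failure helper: succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # failure was expected
    }

    NOT false && echo "command failed, as required"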
00:06:46.585 [2024-12-10 03:54:40.913613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604249 ] 00:06:46.844 [2024-12-10 03:54:40.998262] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 604237 has claimed it. 00:06:46.844 [2024-12-10 03:54:40.998298] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.412 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (604249) - No such process 00:06:47.412 ERROR: process (pid: 604249) is no longer running 00:06:47.412 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.412 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.412 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.412 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.412 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.412 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.412 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 604237 00:06:47.412 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 604237 00:06:47.412 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.671 lslocks: write error 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 604237 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 604237 ']' 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 604237 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 604237 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 604237' 00:06:47.671 killing process with pid 604237 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 604237 00:06:47.671 03:54:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 604237 00:06:47.931 00:06:47.931 real 0m1.771s 00:06:47.931 user 0m1.888s 00:06:47.931 sys 0m0.601s 00:06:47.931 03:54:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.931 
03:54:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.931 ************************************ 00:06:47.931 END TEST locking_app_on_locked_coremask 00:06:47.931 ************************************ 00:06:48.190 03:54:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:48.190 03:54:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.190 03:54:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.190 03:54:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.190 ************************************ 00:06:48.190 START TEST locking_overlapped_coremask 00:06:48.190 ************************************ 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=604540 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 604540 /var/tmp/spdk.sock 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 604540 ']' 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.190 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.190 [2024-12-10 03:54:42.405468] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:06:48.190 [2024-12-10 03:54:42.405506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604540 ]
00:06:48.190 [2024-12-10 03:54:42.463438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:48.190 [2024-12-10 03:54:42.501432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:48.190 [2024-12-10 03:54:42.501534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:48.190 [2024-12-10 03:54:42.501536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=604703
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 604703 /var/tmp/spdk2.sock
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 604703 /var/tmp/spdk2.sock
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:48.449 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 604703 /var/tmp/spdk2.sock
00:06:48.450 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 604703 ']'
00:06:48.450 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:48.450 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:48.450 03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
03:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.450 [2024-12-10 03:54:42.759839] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:06:48.450 [2024-12-10 03:54:42.759883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604703 ]
00:06:48.708 [2024-12-10 03:54:42.847321] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 604540 has claimed it.
00:06:48.708 [2024-12-10 03:54:42.847360] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:49.276 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (604703) - No such process
00:06:49.276 ERROR: process (pid: 604703) is no longer running
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 604540
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 604540 ']'
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 604540
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 604540
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:49.277 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 604540'
killing process with pid 604540
03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 604540
03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 604540
00:06:49.536
00:06:49.536 real 0m1.392s
00:06:49.536 user 0m3.862s
00:06:49.536 sys 0m0.378s
00:06:49.536 03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
03:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:49.536 ************************************
00:06:49.536 END TEST locking_overlapped_coremask
00:06:49.536 ************************************
00:06:49.536 03:54:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
03:54:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
03:54:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
03:54:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:49.536 ************************************
00:06:49.536 START TEST locking_overlapped_coremask_via_rpc
00:06:49.536 ************************************
00:06:49.536 03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=604835
03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 604835 /var/tmp/spdk.sock
03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 604835 ']'
03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
03:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:49.536 [2024-12-10 03:54:43.866719] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:06:49.536 [2024-12-10 03:54:43.866761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604835 ]
00:06:49.795 [2024-12-10 03:54:43.927006] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
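Aside: check_remaining_locks, which closes out each coremask test above, is a straight glob-versus-expansion comparison: the lock files actually present under /var/tmp must match exactly the set implied by the 0x7 core mask (cores 0-2). A condensed sketch of that check, following the traced lines:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    # any extra or missing lock file makes the two arrays differ and fails the test
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]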
00:06:49.795 [2024-12-10 03:54:43.927029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:49.795 [2024-12-10 03:54:43.967217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:49.795 [2024-12-10 03:54:43.967301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:49.795 [2024-12-10 03:54:43.967303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.795 03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=605033
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 605033 /var/tmp/spdk2.sock
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 605033 ']'
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
03:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:50.054 [2024-12-10 03:54:44.226512] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:06:50.054 [2024-12-10 03:54:44.226560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605033 ]
00:06:50.054 [2024-12-10 03:54:44.309920] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:50.054 [2024-12-10 03:54:44.309949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:50.054 [2024-12-10 03:54:44.389979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:50.054 [2024-12-10 03:54:44.397309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:50.054 [2024-12-10 03:54:44.397310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:06:50.991 03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:50.991 [2024-12-10 03:54:45.053334] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 604835 has claimed it.
00:06:50.991 request:
00:06:50.991 {
00:06:50.991 "method": "framework_enable_cpumask_locks",
00:06:50.991 "req_id": 1
00:06:50.991 }
00:06:50.991 Got JSON-RPC error response
00:06:50.991 response:
00:06:50.991 {
00:06:50.991 "code": -32603,
00:06:50.991 "message": "Failed to claim CPU core: 2"
00:06:50.991 }
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 604835 /var/tmp/spdk.sock
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 604835 ']'
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 605033 /var/tmp/spdk2.sock
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 605033 ']'
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
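Aside: the request/response pair above is ordinary JSON-RPC over a Unix domain socket, and the -32603 internal error is what the app framework returns when another process already holds the lock for core 2. Replaying the same call by hand against this run's second instance would look roughly like:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks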
00:06:50.991 03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:51.250 03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:51.250
00:06:51.250 real 0m1.614s
00:06:51.250 user 0m0.746s
00:06:51.250 sys 0m0.135s
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
03:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:51.250 ************************************
00:06:51.250 END TEST locking_overlapped_coremask_via_rpc
00:06:51.250 ************************************
00:06:51.250 03:54:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
03:54:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 604835 ]]
03:54:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 604835
03:54:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 604835 ']'
03:54:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 604835
03:54:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
03:54:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
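Aside: killprocess, entered in the cleanup above, first probes the pid with kill -0 and then checks the comm name so it never signals an unrelated process. A trimmed-down sketch of that flow (the real helper in autotest_common.sh also special-cases a comm name of sudo, omitted here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return                   # is the process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0 / reactor_2 in the runs above
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }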
00:06:51.509 03:54:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605033
00:06:51.509 03:54:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:51.509 03:54:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:51.509 03:54:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605033'
killing process with pid 605033
03:54:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 605033
03:54:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 605033
00:06:52.077 03:54:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:52.077 03:54:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:06:52.077 03:54:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 604835 ]]
00:06:52.077 03:54:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 604835
00:06:52.077 03:54:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 604835 ']'
00:06:52.077 03:54:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 604835
00:06:52.077 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (604835) - No such process
00:06:52.077 03:54:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 604835 is not found'
Process with pid 604835 is not found
00:06:52.077 03:54:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 605033 ]]
00:06:52.077 03:54:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 605033
00:06:52.077 03:54:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 605033 ']'
00:06:52.077 03:54:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 605033
00:06:52.078 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (605033) - No such process
00:06:52.078 03:54:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 605033 is not found'
Process with pid 605033 is not found
00:06:52.078 03:54:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:52.078
00:06:52.078 real 0m13.774s
00:06:52.078 user 0m23.690s
00:06:52.078 sys 0m4.863s
00:06:52.078 03:54:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
03:54:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:52.078 ************************************
00:06:52.078 END TEST cpu_locks
00:06:52.078 ************************************
00:06:52.078
00:06:52.078 real 0m37.137s
00:06:52.078 user 1m9.602s
00:06:52.078 sys 0m8.161s
00:06:52.078 03:54:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable
03:54:46 event -- common/autotest_common.sh@10 -- # set +x
00:06:52.078 ************************************
00:06:52.078 END TEST event
00:06:52.078 ************************************
00:06:52.078 03:54:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh
03:54:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
03:54:46 -- common/autotest_common.sh@1111 -- # xtrace_disable
03:54:46 -- common/autotest_common.sh@10 -- # set +x
00:06:52.078 ************************************
00:06:52.078 START TEST thread
00:06:52.078 ************************************
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh
00:06:52.078 * Looking for test storage...
00:06:52.078 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:52.078 03:54:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:52.078 03:54:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:52.078 03:54:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:52.078 03:54:46 thread -- scripts/common.sh@336 -- # IFS=.-:
00:06:52.078 03:54:46 thread -- scripts/common.sh@336 -- # read -ra ver1
00:06:52.078 03:54:46 thread -- scripts/common.sh@337 -- # IFS=.-:
00:06:52.078 03:54:46 thread -- scripts/common.sh@337 -- # read -ra ver2
00:06:52.078 03:54:46 thread -- scripts/common.sh@338 -- # local 'op=<'
00:06:52.078 03:54:46 thread -- scripts/common.sh@340 -- # ver1_l=2
00:06:52.078 03:54:46 thread -- scripts/common.sh@341 -- # ver2_l=1
00:06:52.078 03:54:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:52.078 03:54:46 thread -- scripts/common.sh@344 -- # case "$op" in
00:06:52.078 03:54:46 thread -- scripts/common.sh@345 -- # : 1
00:06:52.078 03:54:46 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:52.078 03:54:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:52.078 03:54:46 thread -- scripts/common.sh@365 -- # decimal 1
00:06:52.078 03:54:46 thread -- scripts/common.sh@353 -- # local d=1
00:06:52.078 03:54:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:52.078 03:54:46 thread -- scripts/common.sh@355 -- # echo 1
00:06:52.078 03:54:46 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:06:52.078 03:54:46 thread -- scripts/common.sh@366 -- # decimal 2
00:06:52.078 03:54:46 thread -- scripts/common.sh@353 -- # local d=2
00:06:52.078 03:54:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:52.078 03:54:46 thread -- scripts/common.sh@355 -- # echo 2
00:06:52.078 03:54:46 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:06:52.078 03:54:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:52.078 03:54:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:52.078 03:54:46 thread -- scripts/common.sh@368 -- # return 0
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:52.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.078 --rc genhtml_branch_coverage=1
00:06:52.078 --rc genhtml_function_coverage=1
00:06:52.078 --rc genhtml_legend=1
00:06:52.078 --rc geninfo_all_blocks=1
00:06:52.078 --rc geninfo_unexecuted_blocks=1
00:06:52.078
00:06:52.078 '
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:52.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.078 --rc genhtml_branch_coverage=1
00:06:52.078 --rc genhtml_function_coverage=1
00:06:52.078 --rc genhtml_legend=1
00:06:52.078 --rc geninfo_all_blocks=1
00:06:52.078 --rc geninfo_unexecuted_blocks=1
00:06:52.078
00:06:52.078 '
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:52.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.078 --rc genhtml_branch_coverage=1
00:06:52.078 --rc genhtml_function_coverage=1
00:06:52.078 --rc genhtml_legend=1
00:06:52.078 --rc geninfo_all_blocks=1
00:06:52.078 --rc geninfo_unexecuted_blocks=1
00:06:52.078
00:06:52.078 '
00:06:52.078 03:54:46 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:52.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.078 --rc genhtml_branch_coverage=1
00:06:52.078 --rc genhtml_function_coverage=1
00:06:52.078 --rc genhtml_legend=1
00:06:52.078 --rc geninfo_all_blocks=1
00:06:52.078 --rc geninfo_unexecuted_blocks=1
00:06:52.078
00:06:52.078 '
00:06:52.078 03:54:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
03:54:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
03:54:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
03:54:46 thread -- common/autotest_common.sh@10 -- # set +x
00:06:52.337 ************************************
00:06:52.337 START TEST thread_poller_perf
00:06:52.337 ************************************
00:06:52.338 03:54:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:52.338 [2024-12-10 03:54:46.490332] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:06:52.338 [2024-12-10 03:54:46.490388] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605474 ]
00:06:52.338 [2024-12-10 03:54:46.551708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.338 [2024-12-10 03:54:46.588405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.338 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:06:53.275 [2024-12-10T02:54:47.664Z] ======================================
00:06:53.275 [2024-12-10T02:54:47.664Z] busy:2710570290 (cyc)
00:06:53.275 [2024-12-10T02:54:47.664Z] total_run_count: 454000
00:06:53.275 [2024-12-10T02:54:47.664Z] tsc_hz: 2700000000 (cyc)
00:06:53.275 [2024-12-10T02:54:47.664Z] ======================================
00:06:53.275 [2024-12-10T02:54:47.664Z] poller_cost: 5970 (cyc), 2211 (nsec)
00:06:53.275
00:06:53.275 real 0m1.161s
00:06:53.275 user 0m1.096s
00:06:53.275 sys 0m0.062s
00:06:53.275 03:54:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
03:54:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:53.275 ************************************
00:06:53.275 END TEST thread_poller_perf
00:06:53.275 ************************************
00:06:53.535 03:54:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
03:54:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
03:54:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
03:54:47 thread -- common/autotest_common.sh@10 -- # set +x
00:06:53.535 ************************************
00:06:53.535 START TEST thread_poller_perf
00:06:53.535 ************************************
00:06:53.535 03:54:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:53.535 [2024-12-10 03:54:47.713911] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:06:53.535 [2024-12-10 03:54:47.713967] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605754 ]
00:06:53.535 [2024-12-10 03:54:47.774059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.535 [2024-12-10 03:54:47.810611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.535 Running 1000 pollers for 1 seconds with 0 microseconds period.
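Aside: the poller_cost line in each banner above follows directly from the counters printed with it: busy cycles divided by total_run_count gives cycles per poller invocation, converted to nanoseconds via tsc_hz. For the 1 µs run just shown (variable names here are illustrative, not taken from poller_perf itself):

    busy=2710570290 total_run_count=454000 tsc_hz=2700000000
    cyc=$(( busy / total_run_count ))        # 5970 cycles per poller call
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 2211 ns at 2.7 GHz
    echo "poller_cost: $cyc (cyc), $nsec (nsec)"

The same arithmetic yields the 480-cycle / 177 ns figure reported for the 0 µs run that follows.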
00:06:54.472 [2024-12-10T02:54:48.861Z] ======================================
00:06:54.472 [2024-12-10T02:54:48.861Z] busy:2701884516 (cyc)
00:06:54.472 [2024-12-10T02:54:48.861Z] total_run_count: 5626000
00:06:54.472 [2024-12-10T02:54:48.861Z] tsc_hz: 2700000000 (cyc)
00:06:54.472 [2024-12-10T02:54:48.861Z] ======================================
00:06:54.472 [2024-12-10T02:54:48.861Z] poller_cost: 480 (cyc), 177 (nsec)
00:06:54.472
00:06:54.472 real 0m1.152s
00:06:54.472 user 0m1.088s
00:06:54.472 sys 0m0.061s
00:06:54.472 03:54:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
03:54:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:54.472 ************************************
00:06:54.472 END TEST thread_poller_perf
00:06:54.472 ************************************
00:06:54.732 03:54:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:54.732
00:06:54.732 real 0m2.608s
00:06:54.732 user 0m2.320s
00:06:54.732 sys 0m0.301s
00:06:54.732 03:54:48 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
03:54:48 thread -- common/autotest_common.sh@10 -- # set +x
00:06:54.732 ************************************
00:06:54.732 END TEST thread
00:06:54.732 ************************************
00:06:54.732 03:54:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
03:54:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
03:54:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
03:54:48 -- common/autotest_common.sh@1111 -- # xtrace_disable
03:54:48 -- common/autotest_common.sh@10 -- # set +x
00:06:54.732 ************************************
00:06:54.732 START TEST app_cmdline
00:06:54.732 ************************************
00:06:54.732 03:54:48 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
00:06:54.732 * Looking for test storage...
00:06:54.732 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@345 -- # : 1
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:54.732 03:54:49 app_cmdline -- scripts/common.sh@368 -- # return 0
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:54.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.732 --rc genhtml_branch_coverage=1
00:06:54.732 --rc genhtml_function_coverage=1
00:06:54.732 --rc genhtml_legend=1
00:06:54.732 --rc geninfo_all_blocks=1
00:06:54.732 --rc geninfo_unexecuted_blocks=1
00:06:54.732
00:06:54.732 '
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:54.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.732 --rc genhtml_branch_coverage=1
00:06:54.732 --rc genhtml_function_coverage=1
00:06:54.732 --rc genhtml_legend=1
00:06:54.732 --rc geninfo_all_blocks=1
00:06:54.732 --rc geninfo_unexecuted_blocks=1
00:06:54.732
00:06:54.732 '
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:54.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.732 --rc genhtml_branch_coverage=1
00:06:54.732 --rc genhtml_function_coverage=1
00:06:54.732 --rc genhtml_legend=1
00:06:54.732 --rc geninfo_all_blocks=1
00:06:54.732 --rc geninfo_unexecuted_blocks=1
00:06:54.732
00:06:54.732 '
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:54.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.732 --rc genhtml_branch_coverage=1
00:06:54.732 --rc genhtml_function_coverage=1
00:06:54.732 --rc genhtml_legend=1
00:06:54.732 --rc geninfo_all_blocks=1
00:06:54.732 --rc geninfo_unexecuted_blocks=1
00:06:54.732
00:06:54.732 '
00:06:54.732 03:54:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:54.732 03:54:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=606086
00:06:54.732 03:54:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 606086
00:06:54.732 03:54:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 606086 ']'
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:54.732 03:54:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:54.992 [2024-12-10 03:54:49.148124] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
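Aside: this target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and anything else should come back as the -32601 "Method not found" error exercised further down. Probing the allowed surface by hand would look roughly like:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc rpc_get_methods     # expected to list exactly the two allowed methods
    $rpc spdk_get_version    # returns the version JSON shown below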
00:06:54.992 [2024-12-10 03:54:49.148168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606086 ]
00:06:54.992 [2024-12-10 03:54:49.203857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.251 [2024-12-10 03:54:49.240565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.251 03:54:49 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
03:54:49 app_cmdline -- common/autotest_common.sh@868 -- # return 0
03:54:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:06:55.251 {
00:06:55.251 "version": "SPDK v25.01-pre git sha1 86d35c37a",
00:06:55.251 "fields": {
00:06:55.251 "major": 25,
00:06:55.251 "minor": 1,
00:06:55.251 "patch": 0,
00:06:55.251 "suffix": "-pre",
00:06:55.251 "commit": "86d35c37a"
00:06:55.251 }
00:06:55.251 }
00:06:55.251 03:54:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
03:54:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
03:54:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
03:54:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
03:54:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
03:54:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
03:54:49 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
03:54:49 app_cmdline -- app/cmdline.sh@26 -- # sort
03:54:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x
03:54:49 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:55.510 03:54:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
03:54:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
03:54:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
03:54:49 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
03:54:49 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
03:54:49 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
03:54:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
03:54:49 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
03:54:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
03:54:49 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
03:54:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
03:54:49 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
03:54:49 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
03:54:49 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
request:
{
"method": "env_dpdk_get_mem_stats",
"req_id": 1
}
Got JSON-RPC error response
response:
{
"code": -32601,
"message": "Method not found"
}
03:54:49 app_cmdline -- common/autotest_common.sh@655 -- # es=1
03:54:49 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
03:54:49 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
03:54:49 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
03:54:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 606086
03:54:49 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 606086 ']'
03:54:49 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 606086
03:54:49 app_cmdline -- common/autotest_common.sh@959 -- # uname
03:54:49 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
03:54:49 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 606086
03:54:49 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
03:54:49 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
03:54:49 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 606086'
killing process with pid 606086
03:54:49 app_cmdline -- common/autotest_common.sh@973 -- # kill 606086
03:54:49 app_cmdline -- common/autotest_common.sh@978 -- # wait 606086
00:06:56.079
00:06:56.079 real 0m1.229s
00:06:56.079 user 0m1.391s
00:06:56.079 sys 0m0.425s
00:06:56.079 03:54:50 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
03:54:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:56.079 ************************************
00:06:56.079 END TEST app_cmdline
00:06:56.079 ************************************
00:06:56.079 03:54:50 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh
03:54:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
03:54:50 -- common/autotest_common.sh@1111 -- # xtrace_disable
03:54:50 -- common/autotest_common.sh@10 -- # set +x
00:06:56.079 ************************************
00:06:56.079 START TEST version
00:06:56.079 ************************************
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh
00:06:56.079 * Looking for test storage...
00:06:56.079 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1711 -- # lcov --version
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:56.079 03:54:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:56.079 03:54:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:56.079 03:54:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:56.079 03:54:50 version -- scripts/common.sh@336 -- # IFS=.-:
00:06:56.079 03:54:50 version -- scripts/common.sh@336 -- # read -ra ver1
00:06:56.079 03:54:50 version -- scripts/common.sh@337 -- # IFS=.-:
00:06:56.079 03:54:50 version -- scripts/common.sh@337 -- # read -ra ver2
00:06:56.079 03:54:50 version -- scripts/common.sh@338 -- # local 'op=<'
00:06:56.079 03:54:50 version -- scripts/common.sh@340 -- # ver1_l=2
00:06:56.079 03:54:50 version -- scripts/common.sh@341 -- # ver2_l=1
00:06:56.079 03:54:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:56.079 03:54:50 version -- scripts/common.sh@344 -- # case "$op" in
00:06:56.079 03:54:50 version -- scripts/common.sh@345 -- # : 1
00:06:56.079 03:54:50 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:56.079 03:54:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:56.079 03:54:50 version -- scripts/common.sh@365 -- # decimal 1
00:06:56.079 03:54:50 version -- scripts/common.sh@353 -- # local d=1
00:06:56.079 03:54:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:56.079 03:54:50 version -- scripts/common.sh@355 -- # echo 1
00:06:56.079 03:54:50 version -- scripts/common.sh@365 -- # ver1[v]=1
00:06:56.079 03:54:50 version -- scripts/common.sh@366 -- # decimal 2
00:06:56.079 03:54:50 version -- scripts/common.sh@353 -- # local d=2
00:06:56.079 03:54:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:56.079 03:54:50 version -- scripts/common.sh@355 -- # echo 2
00:06:56.079 03:54:50 version -- scripts/common.sh@366 -- # ver2[v]=2
00:06:56.079 03:54:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:56.079 03:54:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:56.079 03:54:50 version -- scripts/common.sh@368 -- # return 0
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:56.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:56.079 --rc genhtml_branch_coverage=1
00:06:56.079 --rc genhtml_function_coverage=1
00:06:56.079 --rc genhtml_legend=1
00:06:56.079 --rc geninfo_all_blocks=1
00:06:56.079 --rc geninfo_unexecuted_blocks=1
00:06:56.079
00:06:56.079 '
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:56.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:56.079 --rc genhtml_branch_coverage=1
00:06:56.079 --rc genhtml_function_coverage=1
00:06:56.079 --rc genhtml_legend=1
00:06:56.079 --rc geninfo_all_blocks=1
00:06:56.079 --rc geninfo_unexecuted_blocks=1
00:06:56.079
00:06:56.079 '
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:56.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:56.079 --rc genhtml_branch_coverage=1
00:06:56.079 --rc genhtml_function_coverage=1
00:06:56.079 --rc genhtml_legend=1
00:06:56.079 --rc geninfo_all_blocks=1
00:06:56.079 --rc geninfo_unexecuted_blocks=1
00:06:56.079
00:06:56.079 '
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:56.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:56.079 --rc genhtml_branch_coverage=1
00:06:56.079 --rc genhtml_function_coverage=1
00:06:56.079 --rc genhtml_legend=1
00:06:56.079 --rc geninfo_all_blocks=1
00:06:56.079 --rc geninfo_unexecuted_blocks=1
00:06:56.079
00:06:56.079 '
00:06:56.079 03:54:50 version -- app/version.sh@17 -- # get_header_version major
00:06:56.079 03:54:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:06:56.079 03:54:50 version -- app/version.sh@14 -- # cut -f2
00:06:56.079 03:54:50 version -- app/version.sh@14 -- # tr -d '"'
00:06:56.079 03:54:50 version -- app/version.sh@17 -- # major=25
00:06:56.079 03:54:50 version -- app/version.sh@18 -- # get_header_version minor
00:06:56.079 03:54:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:06:56.079 03:54:50 version -- app/version.sh@14 -- # cut -f2
00:06:56.079 03:54:50 version -- app/version.sh@14 -- # tr -d '"'
00:06:56.079 03:54:50 version -- app/version.sh@18 -- # minor=1
00:06:56.079 03:54:50 version -- app/version.sh@19 -- # get_header_version patch
00:06:56.079 03:54:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:06:56.079 03:54:50 version -- app/version.sh@14 -- # cut -f2
00:06:56.079 03:54:50 version -- app/version.sh@14 -- # tr -d '"'
00:06:56.079 03:54:50 version -- app/version.sh@19 -- # patch=0
00:06:56.079 03:54:50 version -- app/version.sh@20 -- # get_header_version suffix
00:06:56.079 03:54:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:06:56.079 03:54:50 version -- app/version.sh@14 -- # cut -f2
00:06:56.079 03:54:50 version -- app/version.sh@14 -- # tr -d '"'
00:06:56.079 03:54:50 version -- app/version.sh@20 -- # suffix=-pre
00:06:56.079 03:54:50 version -- app/version.sh@22 -- # version=25.1
00:06:56.079 03:54:50 version -- app/version.sh@25 -- # (( patch != 0 ))
00:06:56.079 03:54:50 version -- app/version.sh@28 -- # version=25.1rc0
00:06:56.079 03:54:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:06:56.079 03:54:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:56.079 03:54:50 version -- app/version.sh@30 -- # py_version=25.1rc0
00:06:56.079 03:54:50 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:06:56.079
00:06:56.079 real 0m0.215s
00:06:56.079 user 0m0.128s
00:06:56.079 sys 0m0.129s
00:06:56.079 03:54:50 version -- common/autotest_common.sh@1130 -- # xtrace_disable
03:54:50 version -- common/autotest_common.sh@10 -- # set +x
00:06:56.079 ************************************
00:06:56.079 END TEST version
00:06:56.079 ************************************
00:06:56.339 03:54:50 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
03:54:50 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
03:54:50 -- spdk/autotest.sh@194 -- # uname -s
03:54:50 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
03:54:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
03:54:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
03:54:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
03:54:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
03:54:50 -- spdk/autotest.sh@260 -- # timing_exit lib
03:54:50 -- common/autotest_common.sh@732 -- # xtrace_disable
03:54:50 -- common/autotest_common.sh@10 -- # set +x
00:06:56.339 03:54:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
03:54:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
03:54:50 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
03:54:50 -- spdk/autotest.sh@277 -- # export NET_TYPE
03:54:50 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']'
03:54:50 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
03:54:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
03:54:50 -- common/autotest_common.sh@1111 -- # xtrace_disable
03:54:50 -- common/autotest_common.sh@10 -- # set +x
00:06:56.339 ************************************
00:06:56.339 START TEST nvmf_rdma
00:06:56.339 ************************************
00:06:56.339 03:54:50 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
00:06:56.339 * Looking for test storage...
00:06:56.339 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:06:56.339 03:54:50 nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:56.339 03:54:50 nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version
00:06:56.339 03:54:50 nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:56.339 03:54:50 nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-:
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-:
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<'
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@345 -- # : 1
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@353 -- # local d=1
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@355 -- # echo 1
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1
00:06:56.339 03:54:50 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2
00:06:56.599 03:54:50 nvmf_rdma -- scripts/common.sh@353 -- # local d=2
00:06:56.599 03:54:50 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:56.599 03:54:50 nvmf_rdma -- scripts/common.sh@355 -- # echo 2
00:06:56.599 03:54:50 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2
00:06:56.599 03:54:50 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:56.599 03:54:50 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:56.599 03:54:50 nvmf_rdma -- scripts/common.sh@368 -- # return 0
00:06:56.599 03:54:50 nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:56.599 03:54:50 nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:56.599 --rc genhtml_branch_coverage=1
00:06:56.599 --rc genhtml_function_coverage=1
00:06:56.599 --rc genhtml_legend=1
00:06:56.599 --rc geninfo_all_blocks=1
00:06:56.599 --rc geninfo_unexecuted_blocks=1
00:06:56.599
00:06:56.599 '
00:06:56.599 03:54:50 nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:56.599 --rc genhtml_branch_coverage=1
00:06:56.599 --rc genhtml_function_coverage=1
00:06:56.599 --rc genhtml_legend=1
00:06:56.599 --rc geninfo_all_blocks=1
00:06:56.599 --rc geninfo_unexecuted_blocks=1
00:06:56.599
00:06:56.599 '
00:06:56.599 03:54:50 nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:56.599 --rc genhtml_branch_coverage=1
00:06:56.599 --rc genhtml_function_coverage=1
00:06:56.599 --rc genhtml_legend=1
00:06:56.599 --rc geninfo_all_blocks=1
00:06:56.599 --rc geninfo_unexecuted_blocks=1
00:06:56.599
00:06:56.599 '
00:06:56.599 03:54:50 nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:56.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:56.599 --rc genhtml_branch_coverage=1
00:06:56.599 --rc genhtml_function_coverage=1
00:06:56.599 --rc genhtml_legend=1
00:06:56.599 --rc geninfo_all_blocks=1
00:06:56.599 --rc geninfo_unexecuted_blocks=1
00:06:56.599
00:06:56.599 '
00:06:56.599 03:54:50 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s
00:06:56.599 03:54:50 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:06:56.599 03:54:50 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma
03:54:50 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
03:54:50 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable
03:54:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:06:56.599 ************************************
00:06:56.599 START TEST nvmf_target_core
00:06:56.599 ************************************
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma
00:06:56.599 * Looking for test storage...
00:06:56.599 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-:
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-:
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<'
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v <
ver1_l : ver2_l) )) 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.599 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.600 --rc genhtml_branch_coverage=1 00:06:56.600 --rc genhtml_function_coverage=1 00:06:56.600 --rc genhtml_legend=1 00:06:56.600 --rc geninfo_all_blocks=1 00:06:56.600 --rc geninfo_unexecuted_blocks=1 00:06:56.600 00:06:56.600 ' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.600 --rc genhtml_branch_coverage=1 00:06:56.600 --rc genhtml_function_coverage=1 00:06:56.600 --rc genhtml_legend=1 00:06:56.600 --rc geninfo_all_blocks=1 00:06:56.600 --rc geninfo_unexecuted_blocks=1 00:06:56.600 00:06:56.600 ' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.600 --rc genhtml_branch_coverage=1 00:06:56.600 --rc genhtml_function_coverage=1 00:06:56.600 --rc genhtml_legend=1 00:06:56.600 --rc geninfo_all_blocks=1 00:06:56.600 --rc geninfo_unexecuted_blocks=1 00:06:56.600 00:06:56.600 ' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.600 --rc genhtml_branch_coverage=1 00:06:56.600 --rc genhtml_function_coverage=1 00:06:56.600 --rc genhtml_legend=1 00:06:56.600 --rc geninfo_all_blocks=1 00:06:56.600 --rc geninfo_unexecuted_blocks=1 00:06:56.600 00:06:56.600 ' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.600 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.600 03:54:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:56.860 
************************************ 00:06:56.860 START TEST nvmf_abort 00:06:56.860 ************************************ 00:06:56.860 03:54:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:06:56.860 * Looking for test storage... 00:06:56.860 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.860 --rc genhtml_branch_coverage=1 00:06:56.860 --rc genhtml_function_coverage=1 00:06:56.860 --rc genhtml_legend=1 00:06:56.860 --rc geninfo_all_blocks=1 00:06:56.860 --rc geninfo_unexecuted_blocks=1 00:06:56.860 00:06:56.860 ' 00:06:56.860 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.861 --rc genhtml_branch_coverage=1 00:06:56.861 --rc genhtml_function_coverage=1 00:06:56.861 --rc genhtml_legend=1 00:06:56.861 --rc geninfo_all_blocks=1 00:06:56.861 --rc geninfo_unexecuted_blocks=1 00:06:56.861 00:06:56.861 ' 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.861 --rc genhtml_branch_coverage=1 00:06:56.861 --rc genhtml_function_coverage=1 00:06:56.861 --rc genhtml_legend=1 00:06:56.861 --rc geninfo_all_blocks=1 00:06:56.861 --rc geninfo_unexecuted_blocks=1 00:06:56.861 00:06:56.861 ' 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.861 --rc genhtml_branch_coverage=1 00:06:56.861 --rc genhtml_function_coverage=1 00:06:56.861 --rc genhtml_legend=1 00:06:56.861 --rc geninfo_all_blocks=1 00:06:56.861 --rc geninfo_unexecuted_blocks=1 00:06:56.861 00:06:56.861 ' 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.861 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:56.861 03:54:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:03.434 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:03.434 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:03.434 Found net devices under 0000:18:00.0: mlx_0_0 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.434 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:03.435 Found net devices under 0000:18:00.1: mlx_0_1 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:03.435 03:54:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:03.435 2: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:07:03.435 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:03.435 altname enp24s0f0np0 00:07:03.435 altname ens785f0np0 00:07:03.435 inet 192.168.100.8/24 scope global mlx_0_0 00:07:03.435 valid_lft forever preferred_lft forever 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:03.435 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:03.435 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:03.435 altname enp24s0f1np1 00:07:03.435 altname ens785f1np1 00:07:03.435 inet 192.168.100.9/24 scope global mlx_0_1 00:07:03.435 valid_lft forever preferred_lft forever 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:03.435 03:54:57 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:03.435 192.168.100.9' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:03.435 192.168.100.9' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:03.435 192.168.100.9' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:03.435 03:54:57 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:03.435 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=609955 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 609955 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 609955 ']' 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 [2024-12-10 03:54:57.212397] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:03.436 [2024-12-10 03:54:57.212443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.436 [2024-12-10 03:54:57.271213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.436 [2024-12-10 03:54:57.310782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.436 [2024-12-10 03:54:57.310817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.436 [2024-12-10 03:54:57.310824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.436 [2024-12-10 03:54:57.310830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.436 [2024-12-10 03:54:57.310836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
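For reference, the nvmfappstart sequence traced above reduces to the shell sketch below. This is a condensed reading of the trace, not the suite's source: the readiness poll is an assumption standing in for the waitforlisten helper, and the binary path, flags, and RPC socket are the ones shown in this workspace's trace.

    # Start the target with the traced core mask and tracepoint mask.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the default RPC socket until the app answers (waitforlisten stand-in).
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done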
00:07:03.436 [2024-12-10 03:54:57.311975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.436 [2024-12-10 03:54:57.312070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.436 [2024-12-10 03:54:57.312072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 [2024-12-10 03:54:57.477447] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2009800/0x200dcf0) succeed. 00:07:03.436 [2024-12-10 03:54:57.493038] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x200adf0/0x204f390) succeed. 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 Malloc0 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 Delay0 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 [2024-12-10 03:54:57.651837] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.436 03:54:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:03.436 [2024-12-10 03:54:57.752018] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:05.972 Initializing NVMe Controllers 00:07:05.972 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:05.972 controller IO queue size 128 less than required 00:07:05.972 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:05.972 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:05.972 Initialization complete. Launching workers. 
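Condensed, the abort target configuration traced above is the following RPC sequence; every command and argument is taken verbatim from the rpc_cmd calls in the trace, with rpc.py assumed as the equivalent invocation. The Delay0 bdev (1s read/write delays over Malloc0) presumably keeps I/O queued long enough for the abort example to have commands to cancel.

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    # Drive load against the target exactly as the trace does:
    build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128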
00:07:05.972 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 46791 00:07:05.972 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 46852, failed to submit 62 00:07:05.972 success 46792, unsuccessful 60, failed 0 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:05.972 rmmod nvme_rdma 00:07:05.972 rmmod nvme_fabrics 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 609955 ']' 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 609955 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 609955 ']' 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 609955 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 609955 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 609955' 00:07:05.972 killing process with pid 609955 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 609955 00:07:05.972 03:54:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 609955 00:07:05.972 03:55:00 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:05.972 00:07:05.972 real 0m9.207s 00:07:05.972 user 0m12.525s 00:07:05.972 sys 0m4.840s 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.972 ************************************ 00:07:05.972 END TEST nvmf_abort 00:07:05.972 ************************************ 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.972 ************************************ 00:07:05.972 START TEST nvmf_ns_hotplug_stress 00:07:05.972 ************************************ 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:05.972 * Looking for test storage... 00:07:05.972 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.972 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
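[editor's note] The cmp_versions trace that follows gates lcov options on whether the installed lcov is older than 2. A minimal standalone sketch of that field-by-field comparison (simplified from scripts/common.sh; the real helper also supports the other operators and non-numeric components, which this version does not):

# Sketch of the lt/cmp_versions logic being traced: split both versions on
# .-: and compare numerically, field by field. Assumes purely numeric fields.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less: 1.15 < 2
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal -> not less-than
}
lt 1.15 2 && echo 'lcov older than 2: enable the branch/function coverage flags'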
00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.232 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.233 --rc genhtml_branch_coverage=1 00:07:06.233 --rc genhtml_function_coverage=1 00:07:06.233 --rc genhtml_legend=1 00:07:06.233 --rc geninfo_all_blocks=1 00:07:06.233 --rc geninfo_unexecuted_blocks=1 00:07:06.233 00:07:06.233 ' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.233 --rc genhtml_branch_coverage=1 00:07:06.233 --rc genhtml_function_coverage=1 00:07:06.233 --rc genhtml_legend=1 00:07:06.233 --rc geninfo_all_blocks=1 00:07:06.233 --rc geninfo_unexecuted_blocks=1 00:07:06.233 00:07:06.233 ' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.233 --rc genhtml_branch_coverage=1 00:07:06.233 --rc genhtml_function_coverage=1 00:07:06.233 --rc genhtml_legend=1 00:07:06.233 --rc geninfo_all_blocks=1 00:07:06.233 --rc geninfo_unexecuted_blocks=1 00:07:06.233 00:07:06.233 ' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:06.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:06.233 --rc genhtml_branch_coverage=1 00:07:06.233 --rc genhtml_function_coverage=1 00:07:06.233 --rc genhtml_legend=1 00:07:06.233 --rc geninfo_all_blocks=1 00:07:06.233 --rc geninfo_unexecuted_blocks=1 00:07:06.233 00:07:06.233 ' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.233 03:55:00 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.233 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:06.233 03:55:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:11.510 03:55:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:11.510 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:11.510 03:55:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:11.510 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:11.510 Found net devices under 0000:18:00.0: mlx_0_0 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
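[editor's note] The discovery pass above can be reproduced by hand; the kernel interface name for a NIC is just the directory under its sysfs net/ node. A small sketch of the lookup the trace performs (the PCI addresses and mlx_0_* names are specific to this host's two Mellanox 0x15b3:0x1015 ports):

# Sketch of the PCI -> net-device mapping traced above.
for pci in 0000:18:00.0 0000:18:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done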
00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:11.510 Found net devices under 0000:18:00.1: mlx_0_1 00:07:11.510 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:11.511 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:11.511 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:11.511 altname enp24s0f0np0 00:07:11.511 altname ens785f0np0 00:07:11.511 inet 192.168.100.8/24 scope global mlx_0_0 00:07:11.511 valid_lft forever preferred_lft forever 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:11.511 03:55:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:11.511 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:11.511 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:11.511 altname enp24s0f1np1 00:07:11.511 altname ens785f1np1 00:07:11.511 inet 192.168.100.9/24 scope global mlx_0_1 00:07:11.511 valid_lft forever preferred_lft forever 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:11.511 192.168.100.9' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:11.511 192.168.100.9' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:11.511 192.168.100.9' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:11.511 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=613579 00:07:11.512 03:55:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 613579 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 613579 ']' 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.512 [2024-12-10 03:55:05.587642] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:11.512 [2024-12-10 03:55:05.587691] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.512 [2024-12-10 03:55:05.646393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.512 [2024-12-10 03:55:05.685074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.512 [2024-12-10 03:55:05.685107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.512 [2024-12-10 03:55:05.685114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.512 [2024-12-10 03:55:05.685120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.512 [2024-12-10 03:55:05.685124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
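[editor's note] The target start-up producing the notices above follows the usual nvmfappstart/waitforlisten pattern: launch nvmf_tgt in the background, then poll its RPC socket rather than sleeping blindly. A minimal sketch under those assumptions ($rootdir standing in for the SPDK checkout; -m 0xE is this run's core mask, pinning reactors to cores 1-3; the exact polling in autotest_common.sh differs in detail):

# Minimal sketch of the start-up handshake traced above.
"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    # Bail out early if the target died instead of spinning forever.
    kill -0 "$nvmfpid" 2> /dev/null || { echo 'nvmf_tgt exited during start-up' >&2; exit 1; }
    sleep 0.5
done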
00:07:11.512 [2024-12-10 03:55:05.686235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.512 [2024-12-10 03:55:05.686318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.512 [2024-12-10 03:55:05.686320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:11.512 03:55:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:11.771 [2024-12-10 03:55:06.009330] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x993800/0x997cf0) succeed. 00:07:11.771 [2024-12-10 03:55:06.017504] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x994df0/0x9d9390) succeed. 00:07:11.771 03:55:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:12.030 03:55:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:12.288 [2024-12-10 03:55:06.470072] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:12.288 03:55:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:12.547 03:55:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:12.547 Malloc0 00:07:12.547 03:55:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:12.806 Delay0 00:07:12.806 03:55:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.065 03:55:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:07:13.065 NULL1 00:07:13.065 03:55:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:13.323 03:55:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=614123 00:07:13.323 03:55:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:13.323 03:55:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:13.323 03:55:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.701 Read completed with error (sct=0, sc=11) 00:07:14.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.701 03:55:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.701 03:55:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:14.701 03:55:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:14.960 true 00:07:14.960 03:55:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:14.960 03:55:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.897 03:55:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.897 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:07:15.897 03:55:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:15.897 03:55:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:16.156 true 00:07:16.156 03:55:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:16.156 03:55:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.093 03:55:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.093 03:55:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:17.093 03:55:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:17.352 true 00:07:17.352 03:55:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:17.352 03:55:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.289 03:55:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.289 03:55:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:18.289 03:55:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:18.551 true 00:07:18.551 03:55:12 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:18.551 03:55:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.494 03:55:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.494 03:55:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:19.494 03:55:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:19.753 true 00:07:19.753 03:55:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:19.753 03:55:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.689 03:55:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.689 03:55:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:20.689 03:55:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:20.947 true 00:07:20.947 03:55:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:20.947 03:55:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.882 03:55:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.882 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:07:21.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.882 03:55:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:21.882 03:55:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:22.141 true 00:07:22.141 03:55:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:22.141 03:55:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.078 03:55:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.078 03:55:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:23.078 03:55:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:23.337 true 00:07:23.337 03:55:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:23.338 03:55:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.274 03:55:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.274 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:24.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.274 03:55:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:24.274 03:55:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:24.533 true 00:07:24.533 03:55:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:24.533 03:55:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.469 03:55:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.469 03:55:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:25.469 03:55:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:25.727 true 00:07:25.727 03:55:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:25.727 03:55:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.664 03:55:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.664 03:55:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:26.664 03:55:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:26.922 true 
00:07:26.922 03:55:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:26.922 03:55:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.859 03:55:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.118 03:55:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:28.118 03:55:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:28.118 true 00:07:28.118 03:55:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:28.118 03:55:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.055 03:55:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.314 03:55:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:29.314 03:55:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:29.314 true 00:07:29.314 03:55:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:29.314 03:55:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
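Note on the records above: the @44-@50 xtrace lines are the test's main hotplug loop. While the I/O generator (PID 614123) is alive, namespace 1 of nqn.2016-06.io.spdk:cnode1 is hot-removed and re-added, and the NULL1 bdev is grown by one unit per pass (null_size 1002, 1003, ...). The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" records are the expected side effect: sct=0/sc=11 appears to decode to NVMe generic status 0x0b, Invalid Namespace or Format, i.e. reads racing the namespace hot-remove. A minimal sketch of the loop, reconstructed from the trace; the $rpc_py and $perf_pid names and the starting null_size are assumptions, not the verbatim script:

    null_size=1000                       # assumed start value; the trace above already shows it past 1000
    while kill -0 "$perf_pid"; do        # script line 44: loop while the workload process is alive
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: re-add it, backed by Delay0
        ((null_size++))                                                     # line 49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                       # line 50: prints "true" on success
    done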
00:07:30.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.250 03:55:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.509 03:55:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:30.509 03:55:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:30.509 true 00:07:30.509 03:55:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:30.509 03:55:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.445 03:55:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.446 03:55:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:31.705 03:55:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:31.705 true 00:07:31.705 03:55:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:31.705 03:55:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.642 03:55:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.642 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:07:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.642 03:55:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:32.642 03:55:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:32.901 true 00:07:32.901 03:55:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:32.901 03:55:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.839 03:55:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.097 03:55:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:34.097 03:55:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:34.097 true 00:07:34.356 03:55:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:34.356 03:55:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.293 03:55:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.293 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:35.293 03:55:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:35.293 03:55:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:35.293 true 00:07:35.293 03:55:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:35.293 03:55:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.232 03:55:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.491 03:55:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:36.491 03:55:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:36.491 true 00:07:36.750 03:55:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:36.750 03:55:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.320 03:55:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.579 03:55:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:37.579 03:55:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:37.837 true 
00:07:37.837 03:55:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:37.837 03:55:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.775 03:55:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.775 03:55:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:38.775 03:55:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:39.034 true 00:07:39.034 03:55:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:39.034 03:55:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.971 03:55:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.230 03:55:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:40.230 03:55:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:40.230 true 00:07:40.230 03:55:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:40.230 03:55:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:07:41.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.168 03:55:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.427 03:55:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:41.427 03:55:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:41.427 true 00:07:41.427 03:55:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:41.427 03:55:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.364 03:55:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.623 03:55:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:42.623 03:55:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:42.623 true 00:07:42.623 03:55:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:42.623 03:55:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.561 03:55:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.561 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:07:43.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.819 03:55:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:43.819 03:55:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:43.819 true 00:07:43.819 03:55:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:43.819 03:55:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.078 03:55:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.337 03:55:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:44.337 03:55:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:44.337 true 00:07:44.337 03:55:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:44.337 03:55:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.596 03:55:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.855 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:44.855 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:44.855 true 00:07:45.114 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:45.114 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.114 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.374 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:45.374 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:45.633 true 00:07:45.633 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:45.633 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:45.633 Initializing NVMe Controllers
00:07:45.633 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:07:45.633 Controller IO queue size 128, less than required.
00:07:45.633 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:45.633 Controller IO queue size 128, less than required.
00:07:45.633 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:45.633 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:45.633 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:45.633 Initialization complete. Launching workers.
00:07:45.633 ========================================================
00:07:45.633                                                                              Latency(us)
00:07:45.633 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:07:45.633 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    6294.33       3.07   17759.85     738.56 1127105.68
00:07:45.633 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   35880.87      17.52    3567.29    1535.67  269087.73
00:07:45.633 ========================================================
00:07:45.633 Total                                                                  :   42175.20      20.59    5685.43     738.56 1127105.68
00:07:45.633
00:07:45.633 03:55:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.892 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:45.892 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:46.151 true 00:07:46.151 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 614123 00:07:46.151 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (614123) - No such process 00:07:46.151 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 614123 00:07:46.151 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.410 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:46.410 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:46.410 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:46.410 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:46.410 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:46.410 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:46.669 null0 00:07:46.669 03:55:40
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:46.669 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:46.669 03:55:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:46.926 null1 00:07:46.926 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:46.926 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:46.926 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:46.926 null2 00:07:46.926 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:46.926 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:46.926 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:47.184 null3 00:07:47.184 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:47.184 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.184 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:47.443 null4 00:07:47.443 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:47.443 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.443 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:47.443 null5 00:07:47.443 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:47.443 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.443 03:55:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:47.702 null6 00:07:47.702 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:47.702 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.702 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:47.962 null7 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
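Note on the records above: the null0 ... null7 lines are the stdout of eight bdev_null_create calls (script lines 58-60), which set up one null bdev per worker before the parallel add/remove phase that follows. A sketch reconstructed from the trace; the loop shape is inferred from the (( i = 0 )) / (( i < nthreads )) / (( ++i )) records, and $rpc_py is an assumed name:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # line 60: each call prints the new bdev's name (null0 ... null7), as logged above
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done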
00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.962 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 620524 620525 620527 620529 620531 620533 620535 620536 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:47.963 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.222 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.222 03:55:42 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:48.482 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:48.482 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:48.482 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:48.482 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.482 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:48.482 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:48.482 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:48.482 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:48.741 03:55:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.000 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
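[editor's sketch] The @16/@17/@18 tags in the trace above all point into target/ns_hotplug_stress.sh. A minimal reconstruction of the loop they record, inferred from the xtrace output alone (the helper name and the null0..null7 bdev setup are assumptions, not the verbatim script): each namespace ID appears to get its own background worker that hot-adds and hot-removes it ten times, which is why the add and remove calls for nsids 1-8 interleave in bursts.

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
add_remove() {                                  # assumed helper name, not from the script
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do              # @16 in the trace
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17: hot-add the namespace
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18: hot-remove it again
    done
}
for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &          # null bdevs were created earlier in the test
done
wait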
00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.001 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:49.260 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:49.260 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.260 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:49.260 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:49.260 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:49.260 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:49.260 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:49.260 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:49.519 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:49.520 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.779 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:49.779 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:49.779 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:49.779 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:49.779 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:49.779 03:55:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:49.779 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:50.039 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:50.039 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.039 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:50.039 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:50.039 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:50.039 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:50.039 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:50.039 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.299 03:55:44 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
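[editor's sketch] Each rpc.py invocation in this loop is a thin JSON-RPC 2.0 client call over SPDK's Unix-domain RPC socket. Roughly what one add/remove pair above puts on the wire, shown here with an OpenBSD-style nc instead of rpc.py; the socket path is SPDK's default and an assumption for this log:

# nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
nc -U /var/tmp/spdk.sock <<'EOF'
{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"nsid": 4, "bdev_name": "null3"}}}
EOF
# nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
nc -U /var/tmp/spdk.sock <<'EOF'
{"jsonrpc": "2.0", "id": 2, "method": "nvmf_subsystem_remove_ns",
 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 4}}
EOF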
00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.299 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 03:55:44 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 03:55:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.818 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.078 03:55:45 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:51.078 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.337 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.597 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.856 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.856 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.856 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.856 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.857 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.857 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.857 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.857 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.857 03:55:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # 
'[' rdma == tcp ']' 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:51.857 rmmod nvme_rdma 00:07:51.857 rmmod nvme_fabrics 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 613579 ']' 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 613579 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 613579 ']' 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 613579 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 613579 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 613579' 00:07:51.857 killing process with pid 613579 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 613579 00:07:51.857 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 613579 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:52.116 00:07:52.116 real 0m46.070s 00:07:52.116 user 3m16.212s 00:07:52.116 sys 0m11.065s 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.116 ************************************ 00:07:52.116 END TEST nvmf_ns_hotplug_stress 00:07:52.116 ************************************ 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.116 03:55:46 
nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.116 ************************************ 00:07:52.116 START TEST nvmf_delete_subsystem 00:07:52.116 ************************************ 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:07:52.116 * Looking for test storage... 00:07:52.116 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:52.116 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:52.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.376 --rc genhtml_branch_coverage=1 00:07:52.376 --rc genhtml_function_coverage=1 00:07:52.376 --rc genhtml_legend=1 00:07:52.376 --rc geninfo_all_blocks=1 00:07:52.376 --rc geninfo_unexecuted_blocks=1 00:07:52.376 00:07:52.376 ' 00:07:52.376 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:52.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.376 --rc genhtml_branch_coverage=1 00:07:52.376 --rc genhtml_function_coverage=1 00:07:52.376 --rc genhtml_legend=1 00:07:52.376 --rc geninfo_all_blocks=1 00:07:52.376 --rc geninfo_unexecuted_blocks=1 00:07:52.376 00:07:52.376 ' 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:52.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.377 --rc genhtml_branch_coverage=1 00:07:52.377 --rc genhtml_function_coverage=1 00:07:52.377 --rc genhtml_legend=1 00:07:52.377 --rc geninfo_all_blocks=1 00:07:52.377 --rc geninfo_unexecuted_blocks=1 00:07:52.377 00:07:52.377 ' 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:52.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.377 --rc genhtml_branch_coverage=1 00:07:52.377 --rc genhtml_function_coverage=1 00:07:52.377 --rc genhtml_legend=1 00:07:52.377 --rc geninfo_all_blocks=1 00:07:52.377 --rc geninfo_unexecuted_blocks=1 00:07:52.377 00:07:52.377 ' 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[further repeats of these three /opt toolchain dirs elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated /opt toolchain dirs elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated /opt toolchain dirs elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated /opt toolchain dirs elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.377 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:52.377 03:55:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:58.952 03:55:52 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:58.952 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.952 
03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:58.952 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:58.952 Found net devices under 0000:18:00.0: mlx_0_0 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:58.952 Found net devices under 0000:18:00.1: mlx_0_1 00:07:58.952 03:55:52 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:07:58.952 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:58.953 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.953 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:58.953 altname enp24s0f0np0 00:07:58.953 altname ens785f0np0 00:07:58.953 inet 192.168.100.8/24 scope global mlx_0_0 00:07:58.953 valid_lft forever preferred_lft forever 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:58.953 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:58.953 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:58.953 altname enp24s0f1np1 00:07:58.953 altname 
ens785f1np1 00:07:58.953 inet 192.168.100.9/24 scope global mlx_0_1 00:07:58.953 valid_lft forever preferred_lft forever 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:58.953 192.168.100.9' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:58.953 192.168.100.9' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:58.953 192.168.100.9' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.953 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=624733 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 624733 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@835 -- # '[' -z 624733 ']' 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.954 [2024-12-10 03:55:52.382261] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:07:58.954 [2024-12-10 03:55:52.382318] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.954 [2024-12-10 03:55:52.442662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:58.954 [2024-12-10 03:55:52.481007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.954 [2024-12-10 03:55:52.481041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.954 [2024-12-10 03:55:52.481048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.954 [2024-12-10 03:55:52.481054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.954 [2024-12-10 03:55:52.481059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
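At this point nvmfappstart has forked nvmf_tgt with -m 0x3 (a two-core mask, matching the two reactors that start in the next entries) and waitforlisten is polling until PID 624733 answers on /var/tmp/spdk.sock. The start-and-poll pattern, reduced to a sketch under stated assumptions: paths are relative to the SPDK checkout, the rpc_get_methods probe mirrors what waitforlisten does, and the retry budget of 100 matches the max_retries traced above, while the error handling is illustrative:

  # Launch the target, then poll the RPC socket; rpc.py exits non-zero
  # until the app is up and listening on /var/tmp/spdk.sock.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  for ((i = 0; i < 100; i++)); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited during startup' >&2; exit 1; }
      sleep 0.1
  done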
00:07:58.954 [2024-12-10 03:55:52.482088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.954 [2024-12-10 03:55:52.482091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.954 [2024-12-10 03:55:52.631848] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2225940/0x2229e30) succeed. 00:07:58.954 [2024-12-10 03:55:52.639870] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2226e90/0x226b4d0) succeed. 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.954 [2024-12-10 03:55:52.726906] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.954 NULL1 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.954 Delay0 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=624765 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:58.954 03:55:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:58.954 [2024-12-10 03:55:52.839347] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
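Everything needed for the race is now staged: cnode1's only namespace is a delay bdev with one-second latencies, and perf is driving 128-deep random read/write queues from two cores, so the nvmf_delete_subsystem that follows lands while I/O is still outstanding by design. The setup traced above, condensed into plain rpc.py calls as a sketch (same values as the trace; assumes rpc.py on the default socket, run from the SPDK checkout):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512
  # Average and p99 read/write latencies of 1,000,000 us (~1 s) keep requests
  # queued long enough for the delete to hit them in flight.
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  build/bin/spdk_nvme_perf -c 0xC -q 128 -w randrw -M 70 -o 512 -t 5 -P 4 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  perf_pid=$!
  sleep 2   # let perf connect and fill its queues
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1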
00:08:00.861 03:55:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.861 03:55:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.861 03:55:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.798 NVMe io qpair process completion error 00:08:01.798 NVMe io qpair process completion error 00:08:01.798 NVMe io qpair process completion error 00:08:01.799 NVMe io qpair process completion error 00:08:01.799 NVMe io qpair process completion error 00:08:01.799 NVMe io qpair process completion error 00:08:01.799 03:55:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.799 03:55:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:01.799 03:55:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 624765 00:08:01.799 03:55:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:02.058 03:55:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:02.058 03:55:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 624765 00:08:02.058 03:55:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:02.626 Read completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Write completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Read completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Write completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Read completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Write completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Write completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Read completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Write completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Write completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Write completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Read completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Read completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Write completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Read completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Read completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.626 Read completed with error (sct=0, sc=8) 00:08:02.626 starting I/O failed: -6 00:08:02.627 Read completed with error (sct=0, sc=8) 00:08:02.627 starting I/O failed: -6 00:08:02.627 Read completed with error (sct=0, sc=8) 00:08:02.627 starting I/O failed: -6 00:08:02.627 Write completed with error (sct=0, sc=8) 00:08:02.627 starting I/O failed: -6 00:08:02.627 Write completed with error (sct=0, sc=8) 00:08:02.627 starting I/O failed: -6 00:08:02.627 Write completed with error (sct=0, sc=8) 00:08:02.627 starting I/O 
failed: -6
[... 00:08:02.626-00:08:02.628: several hundred further "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completions from the two perf cores elided; the pattern repeats until the queued I/O drains ...]
00:08:02.628 Initializing NVMe Controllers
00:08:02.628 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:02.628 Controller IO queue size 128, less than required.
00:08:02.628 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:02.628 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:02.628 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:02.628 Initialization complete. Launching workers.
00:08:02.628 ========================================================
00:08:02.628                                                                            Latency(us)
00:08:02.628 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:08:02.628 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:      80.45       0.04 1594372.75 1000142.29 2978327.78
00:08:02.628 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:      80.45       0.04 1595812.81 1000883.56 2979182.78
00:08:02.628 ========================================================
00:08:02.628 Total                                                                :     160.89       0.08 1595092.78 1000142.29 2979182.78
00:08:02.628
00:08:02.628 03:55:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
03:55:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 624765
03:55:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
[2024-12-10 03:55:56.925139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-12-10 03:55:56.925181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
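The CQ transport errors above are the expected aftermath: the subsystem is gone, so every queued request fails back to perf. The interleaved kill -0 / sleep 0.5 entries are the script's bounded wait for perf to notice and exit, roughly (per the delay=0, kill -0, sleep 0.5 and (( delay++ > 30 )) lines traced at delete_subsystem.sh@34-38; the failure message here is illustrative):

  # Allow ~15 s (30 ticks of 0.5 s) for perf to drain its errors and exit.
  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do
      ((delay++ > 30)) && { echo 'perf did not exit after subsystem delete' >&2; exit 1; }
      sleep 0.5
  done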
00:08:02.628 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 624765 00:08:03.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (624765) - No such process 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 624765 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 624765 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 624765 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.196 [2024-12-10 03:55:57.443103] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=625695 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:03.196 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:03.196 [2024-12-10 03:55:57.524213] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:03.903 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:03.903 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:03.903 03:55:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.161 03:55:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:04.161 03:55:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:04.161 03:55:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.728 03:55:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:04.728 03:55:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:04.728 03:55:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.293 03:55:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:05.293 03:55:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:05.293 03:55:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.861 03:55:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:05.861 03:55:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:05.861 03:55:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:06.122 03:56:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.122 03:56:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:06.122 03:56:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 
-- # sleep 0.5 00:08:06.687 03:56:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.687 03:56:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:06.687 03:56:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.252 03:56:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:07.252 03:56:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:07.252 03:56:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.818 03:56:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:07.818 03:56:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:07.818 03:56:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.385 03:56:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.385 03:56:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:08.385 03:56:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.644 03:56:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.644 03:56:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:08.644 03:56:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.211 03:56:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.211 03:56:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:09.211 03:56:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.779 03:56:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.779 03:56:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:09.779 03:56:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.346 03:56:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.346 03:56:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695 00:08:10.346 03:56:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.346 Initializing NVMe Controllers 00:08:10.346 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:10.346 Controller IO queue size 128, less than required. 00:08:10.346 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:10.346 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:10.346 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:10.346 Initialization complete. Launching workers.
00:08:10.346 ========================================================
00:08:10.346                                                                                    Latency(us)
00:08:10.346 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:08:10.346 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1001282.27 1000055.59 1003884.33
00:08:10.346 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1002425.61 1000058.36 1006510.02
00:08:10.346 ========================================================
00:08:10.346 Total                                                                          :     256.00       0.12 1001853.94 1000055.59 1006510.02
00:08:10.346
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 625695
00:08:10.912 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (625695) - No such process
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 625695
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:10.912 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 624733 ']'
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 624733
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 624733 ']'
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 624733
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 624733
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 624733'
00:08:10.913 killing process with pid 624733
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 624733
00:08:10.913 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 624733
00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:08:11.171
00:08:11.171 real	0m18.932s
00:08:11.171 user	0m48.571s
00:08:11.171 sys	0m5.391s
00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:11.171 ************************************
00:08:11.171 END TEST nvmf_delete_subsystem
00:08:11.171 ************************************
00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:11.171 ************************************
00:08:11.171 START TEST nvmf_host_management
00:08:11.171 ************************************
00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:08:11.171 * Looking for test storage...
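The teardown trace above ends in autotest_common.sh's killprocess. Reduced to the steps the xtrace actually shows (the real helper has more branches, e.g. special handling when the PID resolves to sudo; this is a simplified sketch):

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                       # is it still alive?
    # Refuse to signal the wrong thing if $pid resolved to sudo itself.
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                              # reap; its exit code is not the test's
}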
00:08:11.171 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.171 --rc genhtml_branch_coverage=1 00:08:11.171 --rc genhtml_function_coverage=1 00:08:11.171 --rc genhtml_legend=1 00:08:11.171 --rc geninfo_all_blocks=1 00:08:11.171 --rc geninfo_unexecuted_blocks=1 00:08:11.171 00:08:11.171 ' 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.171 --rc genhtml_branch_coverage=1 00:08:11.171 --rc genhtml_function_coverage=1 00:08:11.171 --rc genhtml_legend=1 00:08:11.171 --rc geninfo_all_blocks=1 00:08:11.171 --rc geninfo_unexecuted_blocks=1 00:08:11.171 00:08:11.171 ' 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.171 --rc genhtml_branch_coverage=1 00:08:11.171 --rc genhtml_function_coverage=1 00:08:11.171 --rc genhtml_legend=1 00:08:11.171 --rc geninfo_all_blocks=1 00:08:11.171 --rc geninfo_unexecuted_blocks=1 00:08:11.171 00:08:11.171 ' 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.171 --rc genhtml_branch_coverage=1 00:08:11.171 --rc genhtml_function_coverage=1 00:08:11.171 --rc genhtml_legend=1 00:08:11.171 --rc geninfo_all_blocks=1 00:08:11.171 --rc geninfo_unexecuted_blocks=1 00:08:11.171 00:08:11.171 ' 00:08:11.171 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.432 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:11.432 03:56:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:18.007 03:56:11 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:18.007 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:18.007 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:18.007 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:18.008 Found net devices under 0000:18:00.0: mlx_0_0 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:18:00.1: mlx_0_1' 00:08:18.008 Found net devices under 0000:18:00.1: mlx_0_1 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
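With both mlx5 ports found, rdma_device_init loads the kernel-side RDMA stack before any NVMe/RDMA traffic can flow. The same sequence, runnable standalone as root, mirroring the modprobe order in the trace above:

for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"   # no-op if already loaded
done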
00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:18.008 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:18.008 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:18.008 altname enp24s0f0np0 00:08:18.008 altname ens785f0np0 00:08:18.008 inet 192.168.100.8/24 scope global mlx_0_0 00:08:18.008 valid_lft forever preferred_lft forever 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:18.008 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:18.008 link/ether 50:6b:4b:b4:ac:7b brd 
ff:ff:ff:ff:ff:ff 00:08:18.008 altname enp24s0f1np1 00:08:18.008 altname ens785f1np1 00:08:18.008 inet 192.168.100.9/24 scope global mlx_0_1 00:08:18.008 valid_lft forever preferred_lft forever 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:18.008 03:56:11 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:18.008 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:18.009 192.168.100.9' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:18.009 192.168.100.9' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:18.009 192.168.100.9' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=630424 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 630424 00:08:18.009 
03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 630424 ']' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.009 [2024-12-10 03:56:11.406697] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:18.009 [2024-12-10 03:56:11.406747] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.009 [2024-12-10 03:56:11.467236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.009 [2024-12-10 03:56:11.508917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.009 [2024-12-10 03:56:11.508954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.009 [2024-12-10 03:56:11.508961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.009 [2024-12-10 03:56:11.508967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.009 [2024-12-10 03:56:11.508971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
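Roughly what nvmfappstart plus waitforlisten (traced above) boil down to; the shm id, event mask and core mask come straight from the xtrace, while the readiness poll is a simplification of the real helper (it probes the RPC socket until the app answers):

# assumes SPDK_DIR points at an SPDK build tree
"$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
    sleep 0.1
done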
00:08:18.009 [2024-12-10 03:56:11.510189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.009 [2024-12-10 03:56:11.510289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.009 [2024-12-10 03:56:11.510318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.009 [2024-12-10 03:56:11.510319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.009 [2024-12-10 03:56:11.661582] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11633c0/0x11678b0) succeed. 00:08:18.009 [2024-12-10 03:56:11.670337] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1164a50/0x11a8f50) succeed. 
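The four reactor lines above follow directly from the -m 0x1E mask: 0x1E = 0b11110, i.e. cores 1 through 4, leaving core 0 free for the bdevperf client started further down. Decoding such a mask by hand:

mask=0x1E
for (( core = 0; core < 32; core++ )); do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done
# prints cores 1, 2, 3 and 4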
00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.009 Malloc0 00:08:18.009 [2024-12-10 03:56:11.848425] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=630572 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 630572 /var/tmp/bdevperf.sock 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 630572 ']' 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
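The @23 cat pipes a prebuilt rpcs.txt into one rpc_cmd session; its contents are not echoed in the log, but given the Malloc0 bdev and the 192.168.100.8:4420 listener it produces (and the 64 MiB / 512 B malloc sizes set at the top of host_management.sh), the batch is equivalent to roughly these individual calls. The subsystem serial number below is a placeholder, not taken from the trace:

rpc() { "$SPDK_DIR"/scripts/rpc.py "$@"; }
rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420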
00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:18.009 { 00:08:18.009 "params": { 00:08:18.009 "name": "Nvme$subsystem", 00:08:18.009 "trtype": "$TEST_TRANSPORT", 00:08:18.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.009 "adrfam": "ipv4", 00:08:18.009 "trsvcid": "$NVMF_PORT", 00:08:18.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.009 "hdgst": ${hdgst:-false}, 00:08:18.009 "ddgst": ${ddgst:-false} 00:08:18.009 }, 00:08:18.009 "method": "bdev_nvme_attach_controller" 00:08:18.009 } 00:08:18.009 EOF 00:08:18.009 )") 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:18.009 03:56:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:18.009 "params": { 00:08:18.009 "name": "Nvme0", 00:08:18.009 "trtype": "rdma", 00:08:18.009 "traddr": "192.168.100.8", 00:08:18.009 "adrfam": "ipv4", 00:08:18.009 "trsvcid": "4420", 00:08:18.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:18.009 "hdgst": false, 00:08:18.009 "ddgst": false 00:08:18.009 }, 00:08:18.009 "method": "bdev_nvme_attach_controller" 00:08:18.009 }' 00:08:18.009 [2024-12-10 03:56:11.939852] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:18.010 [2024-12-10 03:56:11.939893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630572 ] 00:08:18.010 [2024-12-10 03:56:12.007759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.010 [2024-12-10 03:56:12.058870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.010 Running I/O for 10 seconds... 
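The heredoc above renders to a single bdev_nvme_attach_controller entry, which bdevperf receives as a full bdev-subsystem config on /dev/fd/63 via process substitution. A sketch of the assembled document as it plausibly reaches bdevperf; the outer subsystems/config wrapper and the bdev_wait_for_examine terminator are reconstructed from gen_nvmf_target_json's usual output, not printed in this log:

cfg=$(mktemp)
cat > "$cfg" << 'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "rdma",
                    "traddr": "192.168.100.8", "adrfam": "ipv4",
                    "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } },
      { "method": "bdev_wait_for_examine" }
    ]
  } ]
}
JSON
# Same workload flags as the perfpid launch above.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json "$cfg" -q 64 -o 65536 -w verify -t 10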
00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=107 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 107 -ge 100 ']' 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
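A reduced form of the waitforio helper traced above: poll bdevperf's RPC socket until Nvme0n1 reports at least 100 completed reads, with at most 10 tries (this run saw read_io_count=107 on the first check). The retry interval is an assumption, not read from the log:

rpc=./scripts/rpc.py
sock=/var/tmp/bdevperf.sock
for ((i = 10; i != 0; i--)); do
    ops=$($rpc -s "$sock" bdev_get_iostat -b Nvme0n1 \
          | jq -r '.bdevs[0].num_read_ops')
    if [[ $ops -ge 100 ]]; then
        echo "I/O flowing: $ops reads completed"
        break
    fi
    sleep 0.25    # interval is illustrative
done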
00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.010 03:56:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:19.206 232.00 IOPS, 14.50 MiB/s [2024-12-10T02:56:13.595Z] [2024-12-10 03:56:13.355199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106d1c00 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106c1b80 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106b1b00 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106a1a80 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010691a00 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010681980 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010671900 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 
sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010661880 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010651800 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010641780 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010631700 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010621680 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010611600 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010601580 len:0x10000 key:0x182c00 00:08:19.207 [2024-12-10 03:56:13.355420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000170cfe80 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000170bfe00 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 
[2024-12-10 03:56:13.355455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000170afd80 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001709fd00 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001708fc80 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001707fc00 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001706fb80 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001705fb00 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001704fa80 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001703fa00 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001702f980 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355578] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001701f900 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001700f880 len:0x10000 key:0x181a00 00:08:19.207 [2024-12-10 03:56:13.355602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016eeff80 len:0x10000 key:0x181b00 00:08:19.207 [2024-12-10 03:56:13.355615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016edff00 len:0x10000 key:0x181b00 00:08:19.207 [2024-12-10 03:56:13.355629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016ecfe80 len:0x10000 key:0x181b00 00:08:19.207 [2024-12-10 03:56:13.355642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016ebfe00 len:0x10000 key:0x181b00 00:08:19.207 [2024-12-10 03:56:13.355656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016eafd80 len:0x10000 key:0x181b00 00:08:19.207 [2024-12-10 03:56:13.355669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e9fd00 len:0x10000 key:0x181b00 00:08:19.207 [2024-12-10 03:56:13.355682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e8fc80 len:0x10000 key:0x181b00 00:08:19.207 [2024-12-10 03:56:13.355697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e7fc00 len:0x10000 key:0x181b00 00:08:19.207 [2024-12-10 03:56:13.355710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e6fb80 len:0x10000 key:0x181b00 00:08:19.207 [2024-12-10 03:56:13.355723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.207 [2024-12-10 03:56:13.355731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e5fb00 len:0x10000 key:0x181b00 00:08:19.208 [2024-12-10 03:56:13.355737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e4fa80 len:0x10000 key:0x181b00 00:08:19.208 [2024-12-10 03:56:13.355751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e3fa00 len:0x10000 key:0x181b00 00:08:19.208 [2024-12-10 03:56:13.355766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e2f980 len:0x10000 key:0x181b00 00:08:19.208 [2024-12-10 03:56:13.355779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008893000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000088b4000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c2f000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355827] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c0e000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bed000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008bcc000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008ac4000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008aa3000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009db7000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d96000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d75000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d54000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d33000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009d12000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009cf1000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.355987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009cd0000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.355992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.356000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a0cf000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.356005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.356013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a0ae000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.356018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.356026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a08d000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.356031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.356038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a06c000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.356044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.356052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a04b000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.356058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.356066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000a02a000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.356071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.356079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a009000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.356085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.356092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009fe8000 len:0x10000 key:0x182b00 00:08:19.208 [2024-12-10 03:56:13.356098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:c9081000 sqhd:7210 p:0 m:0 dnr:0 00:08:19.208 [2024-12-10 03:56:13.358754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:19.208 task offset: 32768 on job bdev=Nvme0n1 fails 00:08:19.208 00:08:19.208 Latency(us) 00:08:19.208 [2024-12-10T02:56:13.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.208 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:19.208 Job: Nvme0n1 ended in about 1.11 seconds with error 00:08:19.208 Verification LBA range: start 0x0 length 0x400 00:08:19.208 Nvme0n1 : 1.11 208.62 13.04 57.55 0.00 238912.63 2111.72 1025274.31 00:08:19.208 [2024-12-10T02:56:13.597Z] =================================================================================================================== 00:08:19.208 [2024-12-10T02:56:13.597Z] Total : 208.62 13.04 57.55 0.00 238912.63 2111.72 1025274.31 00:08:19.208 [2024-12-10 03:56:13.360338] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.208 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 630572 00:08:19.208 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:19.208 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:19.208 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:19.208 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:19.208 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:19.208 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:19.208 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:19.208 { 00:08:19.208 "params": { 00:08:19.208 "name": "Nvme$subsystem", 00:08:19.208 "trtype": "$TEST_TRANSPORT", 00:08:19.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:19.208 "adrfam": "ipv4", 00:08:19.208 "trsvcid": "$NVMF_PORT", 00:08:19.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:19.208 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:19.208 "hdgst": ${hdgst:-false}, 00:08:19.208 "ddgst": ${ddgst:-false} 00:08:19.208 }, 00:08:19.208 "method": "bdev_nvme_attach_controller" 00:08:19.208 } 00:08:19.208 EOF 00:08:19.208 )") 00:08:19.208 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:19.209 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:19.209 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:19.209 03:56:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:19.209 "params": { 00:08:19.209 "name": "Nvme0", 00:08:19.209 "trtype": "rdma", 00:08:19.209 "traddr": "192.168.100.8", 00:08:19.209 "adrfam": "ipv4", 00:08:19.209 "trsvcid": "4420", 00:08:19.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:19.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:19.209 "hdgst": false, 00:08:19.209 "ddgst": false 00:08:19.209 }, 00:08:19.209 "method": "bdev_nvme_attach_controller" 00:08:19.209 }' 00:08:19.209 [2024-12-10 03:56:13.410824] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:19.209 [2024-12-10 03:56:13.410868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630872 ] 00:08:19.209 [2024-12-10 03:56:13.469474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.209 [2024-12-10 03:56:13.507156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.468 Running I/O for 1 seconds... 00:08:20.406 3264.00 IOPS, 204.00 MiB/s 00:08:20.406 Latency(us) 00:08:20.406 [2024-12-10T02:56:14.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.406 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:20.406 Verification LBA range: start 0x0 length 0x400 00:08:20.406 Nvme0n1 : 1.00 3313.70 207.11 0.00 0.00 18935.80 543.10 30486.38 00:08:20.406 [2024-12-10T02:56:14.795Z] =================================================================================================================== 00:08:20.406 [2024-12-10T02:56:14.795Z] Total : 3313.70 207.11 0.00 0.00 18935.80 543.10 30486.38 00:08:20.665 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 630572 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.665 
03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:20.665 rmmod nvme_rdma 00:08:20.665 rmmod nvme_fabrics 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 630424 ']' 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 630424 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 630424 ']' 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 630424 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 630424 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 630424' 00:08:20.665 killing process with pid 630424 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 630424 00:08:20.665 03:56:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 630424 00:08:20.925 [2024-12-10 03:56:15.207969] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:20.925 00:08:20.925 real 0m9.829s 00:08:20.925 user 0m19.267s 00:08:20.925 sys 0m5.167s 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.925 ************************************ 00:08:20.925 END TEST nvmf_host_management 00:08:20.925 
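The shutdown path here checks the recorded PID before killing it and then retries the host-side module unloads. A condensed sketch of the same pattern; the PID is this run's target (630424), and wait only succeeds because the harness itself spawned the process:

pid=630424
if [[ -n $pid ]] && kill -0 "$pid" 2> /dev/null; then
    # Mirror the ps --no-headers -o comm= check above so only the
    # harness's own reactor process gets killed.
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2> /dev/null || true
fi
# nvmfcleanup retries the unloads because nvme-rdma can stay busy
# briefly after the controller detaches, hence the {1..20} loop.
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1
done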
************************************ 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.925 ************************************ 00:08:20.925 START TEST nvmf_lvol 00:08:20.925 ************************************ 00:08:20.925 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:21.184 * Looking for test storage... 00:08:21.184 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:21.184 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:21.184 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:21.184 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:21.184 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:21.184 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.184 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.184 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.184 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:21.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.185 --rc genhtml_branch_coverage=1 00:08:21.185 --rc genhtml_function_coverage=1 00:08:21.185 --rc genhtml_legend=1 00:08:21.185 --rc geninfo_all_blocks=1 00:08:21.185 --rc geninfo_unexecuted_blocks=1 00:08:21.185 00:08:21.185 ' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:21.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.185 --rc genhtml_branch_coverage=1 00:08:21.185 --rc genhtml_function_coverage=1 00:08:21.185 --rc genhtml_legend=1 00:08:21.185 --rc geninfo_all_blocks=1 00:08:21.185 --rc geninfo_unexecuted_blocks=1 00:08:21.185 00:08:21.185 ' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:21.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.185 --rc genhtml_branch_coverage=1 00:08:21.185 --rc genhtml_function_coverage=1 00:08:21.185 --rc genhtml_legend=1 00:08:21.185 --rc geninfo_all_blocks=1 00:08:21.185 --rc geninfo_unexecuted_blocks=1 00:08:21.185 00:08:21.185 ' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:21.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.185 --rc genhtml_branch_coverage=1 00:08:21.185 --rc genhtml_function_coverage=1 00:08:21.185 --rc genhtml_legend=1 00:08:21.185 --rc geninfo_all_blocks=1 00:08:21.185 --rc geninfo_unexecuted_blocks=1 00:08:21.185 00:08:21.185 ' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.185 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.185 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.186 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.186 03:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.459 03:56:20 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.459 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:26.720 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:26.720 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:26.720 03:56:20 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:26.720 Found net devices under 0000:18:00.0: mlx_0_0 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:26.720 Found net devices under 0000:18:00.1: mlx_0_1 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:26.720 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:26.721 
03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:26.721 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:26.721 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:26.721 altname enp24s0f0np0 00:08:26.721 altname ens785f0np0 00:08:26.721 inet 192.168.100.8/24 scope global mlx_0_0 00:08:26.721 valid_lft forever preferred_lft forever 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:26.721 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:26.721 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:26.721 altname enp24s0f1np1 00:08:26.721 altname ens785f1np1 00:08:26.721 inet 192.168.100.9/24 scope global mlx_0_1 00:08:26.721 valid_lft forever preferred_lft forever 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@109 -- # continue 2 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:26.721 03:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:26.721 192.168.100.9' 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:26.721 192.168.100.9' 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:26.721 192.168.100.9' 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:26.721 
03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=634525 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 634525 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 634525 ']' 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.721 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.980 [2024-12-10 03:56:21.104563] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:26.980 [2024-12-10 03:56:21.104612] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.980 [2024-12-10 03:56:21.164578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.980 [2024-12-10 03:56:21.203648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.980 [2024-12-10 03:56:21.203683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.980 [2024-12-10 03:56:21.203690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.980 [2024-12-10 03:56:21.203696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.980 [2024-12-10 03:56:21.203700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
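nvmf_tgt was launched above with -m 0x7, and the DPDK EAL accordingly reports three usable cores; the reactor notices that follow confirm the mapping. A quick sketch of how a hex core mask decodes into core numbers (plain bash, nothing SPDK-specific):

    mask=0x7                       # the -m argument passed to nvmf_tgt above
    for core in {0..31}; do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 0, 1 and 2 - matching the three "Reactor started" notices below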
00:08:26.980 [2024-12-10 03:56:21.204968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.980 [2024-12-10 03:56:21.205049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.980 [2024-12-10 03:56:21.205051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.980 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.980 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:26.980 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.980 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.980 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:26.980 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.980 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:27.239 [2024-12-10 03:56:21.504293] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcd2500/0xcd69f0) succeed. 00:08:27.239 [2024-12-10 03:56:21.512361] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcd3af0/0xd18090) succeed. 00:08:27.239 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.498 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:27.498 03:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:27.757 03:56:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:27.757 03:56:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:28.016 03:56:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:28.016 03:56:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1e3dcbb3-63be-4b9e-ade5-47b2aa69e539 00:08:28.016 03:56:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1e3dcbb3-63be-4b9e-ade5-47b2aa69e539 lvol 20 00:08:28.275 03:56:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=62849d59-9163-4e2b-9a15-9f76c9852b08 00:08:28.275 03:56:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.534 03:56:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 62849d59-9163-4e2b-9a15-9f76c9852b08 00:08:28.793 03:56:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:28.793 [2024-12-10 03:56:23.083991] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:28.793 03:56:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:29.052 03:56:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=634853 00:08:29.052 03:56:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:29.052 03:56:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:29.988 03:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 62849d59-9163-4e2b-9a15-9f76c9852b08 MY_SNAPSHOT 00:08:30.248 03:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6486a0ff-147b-4797-9b4e-8b5cff062e39 00:08:30.248 03:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 62849d59-9163-4e2b-9a15-9f76c9852b08 30 00:08:30.507 03:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6486a0ff-147b-4797-9b4e-8b5cff062e39 MY_CLONE 00:08:30.507 03:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ecab97ff-8c0b-4da5-84cb-c566e8a31cbc 00:08:30.507 03:56:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ecab97ff-8c0b-4da5-84cb-c566e8a31cbc 00:08:30.766 03:56:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 634853 00:08:40.746 Initializing NVMe Controllers 00:08:40.746 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:40.746 Controller IO queue size 128, less than required. 00:08:40.746 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:40.746 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:40.746 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:40.746 Initialization complete. Launching workers. 
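Before the numbers land, the spdk_nvme_perf invocation from nvmf_lvol.sh@41 above is worth unpacking. A sketch with the same flags, annotated; this only restates what the log shows, and the -s 512 memory argument is kept exactly as passed:

    perf_args=(
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'  # NVMe/RDMA target portal
      -o 4096        # 4 KiB I/O size
      -q 128         # queue depth 128 per worker
      -s 512         # memory size argument, as passed in the log
      -w randwrite   # random-write workload
      -t 10          # 10-second run
      -c 0x18        # core mask 0x18 = cores 3 and 4, the two rows in the table below
    )
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf "${perf_args[@]}"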
00:08:40.746 ========================================================
00:08:40.746 Latency(us)
00:08:40.746 Device Information : IOPS MiB/s Average min max
00:08:40.746 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17395.90 67.95 7359.59 2087.17 47432.91
00:08:40.746 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17298.70 67.57 7400.78 2791.70 44030.83
00:08:40.746 ========================================================
00:08:40.746 Total : 34694.60 135.53 7380.13 2087.17 47432.91
00:08:40.746
00:08:40.746 03:56:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.746 03:56:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 62849d59-9163-4e2b-9a15-9f76c9852b08 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e3dcbb3-63be-4b9e-ade5-47b2aa69e539 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:41.005 rmmod nvme_rdma 00:08:41.005 rmmod nvme_fabrics 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 634525 ']' 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 634525 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 634525 ']' 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 634525 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634525 00:08:41.005 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.006 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634525' 00:08:41.006 killing process with pid 634525 00:08:41.006 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 634525 00:08:41.006 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 634525 00:08:41.264 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.264 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:41.264 00:08:41.264 real 0m20.314s 00:08:41.264 user 1m9.317s 00:08:41.264 sys 0m5.295s 00:08:41.265 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.265 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:41.265 ************************************ 00:08:41.265 END TEST nvmf_lvol 00:08:41.265 ************************************ 00:08:41.265 03:56:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:41.265 03:56:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.265 03:56:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.265 03:56:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.524 ************************************ 00:08:41.524 START TEST nvmf_lvs_grow 00:08:41.524 ************************************ 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:41.524 * Looking for test storage... 
00:08:41.524 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.524 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:41.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.525 --rc genhtml_branch_coverage=1 00:08:41.525 --rc genhtml_function_coverage=1 00:08:41.525 --rc genhtml_legend=1 00:08:41.525 --rc geninfo_all_blocks=1 00:08:41.525 --rc geninfo_unexecuted_blocks=1 00:08:41.525 00:08:41.525 ' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:41.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.525 --rc genhtml_branch_coverage=1 00:08:41.525 --rc genhtml_function_coverage=1 00:08:41.525 --rc genhtml_legend=1 00:08:41.525 --rc geninfo_all_blocks=1 00:08:41.525 --rc geninfo_unexecuted_blocks=1 00:08:41.525 00:08:41.525 ' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:41.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.525 --rc genhtml_branch_coverage=1 00:08:41.525 --rc genhtml_function_coverage=1 00:08:41.525 --rc genhtml_legend=1 00:08:41.525 --rc geninfo_all_blocks=1 00:08:41.525 --rc geninfo_unexecuted_blocks=1 00:08:41.525 00:08:41.525 ' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:41.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.525 --rc genhtml_branch_coverage=1 00:08:41.525 --rc genhtml_function_coverage=1 00:08:41.525 --rc genhtml_legend=1 00:08:41.525 --rc geninfo_all_blocks=1 00:08:41.525 --rc geninfo_unexecuted_blocks=1 00:08:41.525 00:08:41.525 ' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
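The lt/cmp_versions trace above is scripts/common.sh checking whether the installed lcov (1.15 here) predates version 2 before choosing LCOV_OPTS. A condensed, self-contained sketch of that component-wise comparison; the real helper also validates each field via decimal() and supports the other operators:

    version_lt() {                  # usage: version_lt 1.15 2
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                      # equal versions are not strictly less
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints, as in the run above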
00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.525 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.525 03:56:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.096 03:56:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:48.096 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:48.096 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:48.096 03:56:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:48.096 Found net devices under 0000:18:00.0: mlx_0_0 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:48.096 Found net devices under 0000:18:00.1: mlx_0_1 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.096 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:48.097 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.097 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:48.097 altname enp24s0f0np0 00:08:48.097 altname ens785f0np0 00:08:48.097 inet 192.168.100.8/24 scope global mlx_0_0 00:08:48.097 valid_lft forever preferred_lft forever 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:48.097 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.097 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:48.097 altname enp24s0f1np1 00:08:48.097 altname ens785f1np1 00:08:48.097 inet 192.168.100.9/24 scope global mlx_0_1 00:08:48.097 valid_lft forever preferred_lft forever 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.097 03:56:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:48.097 192.168.100.9' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:48.097 192.168.100.9' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:48.097 192.168.100.9' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=640462 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 640462 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 640462 ']' 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.097 [2024-12-10 03:56:41.704202] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:08:48.097 [2024-12-10 03:56:41.704255] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.097 [2024-12-10 03:56:41.764128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.097 [2024-12-10 03:56:41.803923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.097 [2024-12-10 03:56:41.803954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.097 [2024-12-10 03:56:41.803961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.097 [2024-12-10 03:56:41.803967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.097 [2024-12-10 03:56:41.803971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
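The common.sh helpers traced above walk each mlx5 PCI function under /sys, resolve it to a kernel netdev, and read back its IPv4 address. A minimal standalone sketch of that discovery flow, using the PCI addresses and interface names from this run (on other machines they will differ):

  # Map RDMA-capable PCI functions to their netdevs, then pull each IPv4 address
  # the way get_ip_address does it: field 4 of `ip -o -4`, prefix length cut off.
  for pci in 0000:18:00.0 0000:18:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          name=${dev##*/}        # e.g. mlx_0_0
          addr=$(ip -o -4 addr show "$name" | awk '{print $4}' | cut -d/ -f1)
          echo "Found net device under $pci: $name (${addr:-no IPv4})"
      done
  done

With 192.168.100.8 and 192.168.100.9 resolved, the harness sets NVMF_TRANSPORT_OPTS to '-t rdma --num-shared-buffers 1024' and loads nvme-rdma before bringing up the target.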
00:08:48.097 [2024-12-10 03:56:41.804435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.097 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.098 03:56:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:48.098 [2024-12-10 03:56:42.106000] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1690dc0/0x16952b0) succeed. 00:08:48.098 [2024-12-10 03:56:42.113994] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1692270/0x16d6950) succeed. 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.098 ************************************ 00:08:48.098 START TEST lvs_grow_clean 00:08:48.098 ************************************ 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:48.098 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:48.357 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:08:48.357 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:08:48.357 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:48.616 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:48.616 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:48.616 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cb54add4-ab07-4f40-ba61-5b47c8b5248a lvol 150 00:08:48.616 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=87d09384-8891-4aa5-976d-7acb67444f9a 00:08:48.616 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.616 03:56:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:48.874 [2024-12-10 03:56:43.085441] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:48.874 [2024-12-10 03:56:43.085485] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:48.874 true 00:08:48.874 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:48.874 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:08:49.133 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:49.133 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:49.133 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 87d09384-8891-4aa5-976d-7acb67444f9a 00:08:49.392 03:56:43 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:49.392 [2024-12-10 03:56:43.739550] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:49.392 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=641020 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 641020 /var/tmp/bdevperf.sock 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 641020 ']' 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:49.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.651 03:56:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:49.651 [2024-12-10 03:56:43.952029] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
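Stripped of the xtrace prefixes, the lvs_grow setup traced above reduces to the RPC sequence below. This is a condensed sketch: the aio file path is shortened, and $rpc stands in for the full scripts/rpc.py path used in the log; sizes and counts are the ones this run reports.

  rpc=./scripts/rpc.py            # stand-in for the full rpc.py path above
  aio=/tmp/aio_bdev               # stand-in for .../test/nvmf/target/aio_bdev

  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096            # 200 MiB / 4 KiB = 51200 blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs) # 49 usable 4 MiB data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)     # rounds up to 38 clusters = 38912 blocks
  truncate -s 400M "$aio"                              # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                        # block count 51200 -> 102400
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                # total_data_clusters 49 -> 99

In the trace the grow_lvstore call (nvmf_lvs_grow.sh@60) is issued two seconds into the bdevperf run, so the lvstore is grown while randwrite I/O is in flight; that online grow is the behavior this test exercises.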
00:08:49.651 [2024-12-10 03:56:43.952074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641020 ] 00:08:49.651 [2024-12-10 03:56:44.008326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.910 [2024-12-10 03:56:44.046179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.910 03:56:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.910 03:56:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:49.910 03:56:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:50.169 Nvme0n1 00:08:50.169 03:56:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:50.428 [ 00:08:50.428 { 00:08:50.428 "name": "Nvme0n1", 00:08:50.428 "aliases": [ 00:08:50.428 "87d09384-8891-4aa5-976d-7acb67444f9a" 00:08:50.428 ], 00:08:50.428 "product_name": "NVMe disk", 00:08:50.428 "block_size": 4096, 00:08:50.428 "num_blocks": 38912, 00:08:50.428 "uuid": "87d09384-8891-4aa5-976d-7acb67444f9a", 00:08:50.428 "numa_id": 0, 00:08:50.428 "assigned_rate_limits": { 00:08:50.428 "rw_ios_per_sec": 0, 00:08:50.428 "rw_mbytes_per_sec": 0, 00:08:50.428 "r_mbytes_per_sec": 0, 00:08:50.428 "w_mbytes_per_sec": 0 00:08:50.428 }, 00:08:50.428 "claimed": false, 00:08:50.428 "zoned": false, 00:08:50.428 "supported_io_types": { 00:08:50.428 "read": true, 00:08:50.428 "write": true, 00:08:50.428 "unmap": true, 00:08:50.428 "flush": true, 00:08:50.428 "reset": true, 00:08:50.428 "nvme_admin": true, 00:08:50.428 "nvme_io": true, 00:08:50.428 "nvme_io_md": false, 00:08:50.428 "write_zeroes": true, 00:08:50.428 "zcopy": false, 00:08:50.428 "get_zone_info": false, 00:08:50.428 "zone_management": false, 00:08:50.428 "zone_append": false, 00:08:50.428 "compare": true, 00:08:50.428 "compare_and_write": true, 00:08:50.428 "abort": true, 00:08:50.428 "seek_hole": false, 00:08:50.428 "seek_data": false, 00:08:50.428 "copy": true, 00:08:50.428 "nvme_iov_md": false 00:08:50.428 }, 00:08:50.428 "memory_domains": [ 00:08:50.428 { 00:08:50.428 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:50.428 "dma_device_type": 0 00:08:50.428 } 00:08:50.428 ], 00:08:50.428 "driver_specific": { 00:08:50.428 "nvme": [ 00:08:50.428 { 00:08:50.428 "trid": { 00:08:50.428 "trtype": "RDMA", 00:08:50.428 "adrfam": "IPv4", 00:08:50.428 "traddr": "192.168.100.8", 00:08:50.428 "trsvcid": "4420", 00:08:50.428 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:50.428 }, 00:08:50.428 "ctrlr_data": { 00:08:50.428 "cntlid": 1, 00:08:50.428 "vendor_id": "0x8086", 00:08:50.428 "model_number": "SPDK bdev Controller", 00:08:50.428 "serial_number": "SPDK0", 00:08:50.428 "firmware_revision": "25.01", 00:08:50.428 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:50.428 "oacs": { 00:08:50.428 "security": 0, 00:08:50.428 "format": 0, 00:08:50.428 "firmware": 0, 00:08:50.428 "ns_manage": 0 00:08:50.428 }, 00:08:50.428 "multi_ctrlr": true, 
00:08:50.428 "ana_reporting": false 00:08:50.428 }, 00:08:50.428 "vs": { 00:08:50.428 "nvme_version": "1.3" 00:08:50.428 }, 00:08:50.428 "ns_data": { 00:08:50.428 "id": 1, 00:08:50.428 "can_share": true 00:08:50.428 } 00:08:50.428 } 00:08:50.428 ], 00:08:50.428 "mp_policy": "active_passive" 00:08:50.428 } 00:08:50.428 } 00:08:50.428 ] 00:08:50.428 03:56:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=641036 00:08:50.428 03:56:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:50.428 03:56:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:50.428 Running I/O for 10 seconds... 00:08:51.366 Latency(us) 00:08:51.366 [2024-12-10T02:56:45.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.366 Nvme0n1 : 1.00 37120.00 145.00 0.00 0.00 0.00 0.00 0.00 00:08:51.366 [2024-12-10T02:56:45.755Z] =================================================================================================================== 00:08:51.366 [2024-12-10T02:56:45.755Z] Total : 37120.00 145.00 0.00 0.00 0.00 0.00 0.00 00:08:51.366 00:08:52.304 03:56:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:08:52.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.304 Nvme0n1 : 2.00 37408.00 146.12 0.00 0.00 0.00 0.00 0.00 00:08:52.304 [2024-12-10T02:56:46.693Z] =================================================================================================================== 00:08:52.304 [2024-12-10T02:56:46.693Z] Total : 37408.00 146.12 0.00 0.00 0.00 0.00 0.00 00:08:52.304 00:08:52.563 true 00:08:52.563 03:56:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:08:52.563 03:56:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:52.563 03:56:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:52.563 03:56:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:52.563 03:56:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 641036 00:08:53.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.500 Nvme0n1 : 3.00 37525.00 146.58 0.00 0.00 0.00 0.00 0.00 00:08:53.500 [2024-12-10T02:56:47.889Z] =================================================================================================================== 00:08:53.500 [2024-12-10T02:56:47.889Z] Total : 37525.00 146.58 0.00 0.00 0.00 0.00 0.00 00:08:53.500 00:08:54.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.437 Nvme0n1 : 4.00 37632.00 147.00 0.00 0.00 0.00 0.00 0.00 00:08:54.437 [2024-12-10T02:56:48.826Z] 
=================================================================================================================== 00:08:54.437 [2024-12-10T02:56:48.826Z] Total : 37632.00 147.00 0.00 0.00 0.00 0.00 0.00 00:08:54.437 00:08:55.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.375 Nvme0n1 : 5.00 37696.60 147.25 0.00 0.00 0.00 0.00 0.00 00:08:55.375 [2024-12-10T02:56:49.764Z] =================================================================================================================== 00:08:55.375 [2024-12-10T02:56:49.764Z] Total : 37696.60 147.25 0.00 0.00 0.00 0.00 0.00 00:08:55.375 00:08:56.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.313 Nvme0n1 : 6.00 37680.00 147.19 0.00 0.00 0.00 0.00 0.00 00:08:56.313 [2024-12-10T02:56:50.702Z] =================================================================================================================== 00:08:56.313 [2024-12-10T02:56:50.702Z] Total : 37680.00 147.19 0.00 0.00 0.00 0.00 0.00 00:08:56.313 00:08:57.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.691 Nvme0n1 : 7.00 37723.57 147.36 0.00 0.00 0.00 0.00 0.00 00:08:57.691 [2024-12-10T02:56:52.080Z] =================================================================================================================== 00:08:57.691 [2024-12-10T02:56:52.080Z] Total : 37723.57 147.36 0.00 0.00 0.00 0.00 0.00 00:08:57.691 00:08:58.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.628 Nvme0n1 : 8.00 37760.12 147.50 0.00 0.00 0.00 0.00 0.00 00:08:58.628 [2024-12-10T02:56:53.017Z] =================================================================================================================== 00:08:58.628 [2024-12-10T02:56:53.017Z] Total : 37760.12 147.50 0.00 0.00 0.00 0.00 0.00 00:08:58.628 00:08:59.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.565 Nvme0n1 : 9.00 37788.33 147.61 0.00 0.00 0.00 0.00 0.00 00:08:59.565 [2024-12-10T02:56:53.954Z] =================================================================================================================== 00:08:59.565 [2024-12-10T02:56:53.954Z] Total : 37788.33 147.61 0.00 0.00 0.00 0.00 0.00 00:08:59.565 00:09:00.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.576 Nvme0n1 : 10.00 37814.40 147.71 0.00 0.00 0.00 0.00 0.00 00:09:00.576 [2024-12-10T02:56:54.965Z] =================================================================================================================== 00:09:00.576 [2024-12-10T02:56:54.965Z] Total : 37814.40 147.71 0.00 0.00 0.00 0.00 0.00 00:09:00.576 00:09:00.576 00:09:00.576 Latency(us) 00:09:00.576 [2024-12-10T02:56:54.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.576 Nvme0n1 : 10.00 37812.22 147.70 0.00 0.00 3382.37 2075.31 10340.12 00:09:00.576 [2024-12-10T02:56:54.965Z] =================================================================================================================== 00:09:00.576 [2024-12-10T02:56:54.965Z] Total : 37812.22 147.70 0.00 0.00 3382.37 2075.31 10340.12 00:09:00.576 { 00:09:00.576 "results": [ 00:09:00.576 { 00:09:00.576 "job": "Nvme0n1", 00:09:00.576 "core_mask": "0x2", 00:09:00.576 "workload": "randwrite", 00:09:00.576 "status": "finished", 00:09:00.576 "queue_depth": 128, 00:09:00.576 "io_size": 4096, 
00:09:00.576 "runtime": 10.003089, 00:09:00.576 "iops": 37812.21980530214, 00:09:00.576 "mibps": 147.7039836144615, 00:09:00.576 "io_failed": 0, 00:09:00.576 "io_timeout": 0, 00:09:00.576 "avg_latency_us": 3382.3734790926333, 00:09:00.576 "min_latency_us": 2075.306666666667, 00:09:00.576 "max_latency_us": 10340.124444444444 00:09:00.576 } 00:09:00.576 ], 00:09:00.576 "core_count": 1 00:09:00.576 } 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 641020 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 641020 ']' 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 641020 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 641020 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 641020' 00:09:00.576 killing process with pid 641020 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 641020 00:09:00.576 Received shutdown signal, test time was about 10.000000 seconds 00:09:00.576 00:09:00.576 Latency(us) 00:09:00.576 [2024-12-10T02:56:54.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.576 [2024-12-10T02:56:54.965Z] =================================================================================================================== 00:09:00.576 [2024-12-10T02:56:54.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:00.576 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 641020 00:09:00.888 03:56:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:00.888 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:01.146 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:01.146 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:09:01.146 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:01.146 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:01.146 03:56:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:01.405 [2024-12-10 03:56:55.655859] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:01.405 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:01.406 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:09:01.665 request: 00:09:01.665 { 00:09:01.665 "uuid": "cb54add4-ab07-4f40-ba61-5b47c8b5248a", 00:09:01.665 "method": "bdev_lvol_get_lvstores", 00:09:01.665 "req_id": 1 00:09:01.665 } 00:09:01.665 Got JSON-RPC error response 00:09:01.665 response: 00:09:01.665 { 00:09:01.665 "code": -19, 00:09:01.665 "message": "No such device" 00:09:01.665 } 00:09:01.665 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:01.665 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.665 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:01.665 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:01.665 03:56:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.665 aio_bdev 00:09:01.665 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 87d09384-8891-4aa5-976d-7acb67444f9a 00:09:01.665 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=87d09384-8891-4aa5-976d-7acb67444f9a 00:09:01.665 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.665 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:01.665 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.665 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.665 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:01.924 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 87d09384-8891-4aa5-976d-7acb67444f9a -t 2000 00:09:02.183 [ 00:09:02.183 { 00:09:02.183 "name": "87d09384-8891-4aa5-976d-7acb67444f9a", 00:09:02.183 "aliases": [ 00:09:02.183 "lvs/lvol" 00:09:02.183 ], 00:09:02.183 "product_name": "Logical Volume", 00:09:02.183 "block_size": 4096, 00:09:02.183 "num_blocks": 38912, 00:09:02.183 "uuid": "87d09384-8891-4aa5-976d-7acb67444f9a", 00:09:02.183 "assigned_rate_limits": { 00:09:02.183 "rw_ios_per_sec": 0, 00:09:02.183 "rw_mbytes_per_sec": 0, 00:09:02.183 "r_mbytes_per_sec": 0, 00:09:02.183 "w_mbytes_per_sec": 0 00:09:02.183 }, 00:09:02.183 "claimed": false, 00:09:02.183 "zoned": false, 00:09:02.183 "supported_io_types": { 00:09:02.183 "read": true, 00:09:02.183 "write": true, 00:09:02.183 "unmap": true, 00:09:02.183 "flush": false, 00:09:02.183 "reset": true, 00:09:02.183 "nvme_admin": false, 00:09:02.183 "nvme_io": false, 00:09:02.183 "nvme_io_md": false, 00:09:02.183 "write_zeroes": true, 00:09:02.183 "zcopy": false, 00:09:02.183 "get_zone_info": false, 00:09:02.183 "zone_management": false, 00:09:02.183 "zone_append": false, 00:09:02.183 "compare": false, 00:09:02.183 "compare_and_write": false, 00:09:02.183 "abort": false, 00:09:02.183 "seek_hole": true, 00:09:02.183 "seek_data": true, 00:09:02.183 "copy": false, 00:09:02.183 "nvme_iov_md": false 00:09:02.183 }, 00:09:02.183 "driver_specific": { 00:09:02.183 "lvol": { 00:09:02.183 "lvol_store_uuid": "cb54add4-ab07-4f40-ba61-5b47c8b5248a", 00:09:02.183 "base_bdev": "aio_bdev", 00:09:02.183 "thin_provision": false, 00:09:02.183 "num_allocated_clusters": 38, 00:09:02.183 "snapshot": false, 00:09:02.183 "clone": false, 00:09:02.183 "esnap_clone": false 00:09:02.183 } 00:09:02.183 } 00:09:02.183 } 00:09:02.184 ] 00:09:02.184 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:02.184 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:02.184 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:09:02.184 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:02.184 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:09:02.184 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:02.443 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:02.443 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 87d09384-8891-4aa5-976d-7acb67444f9a 00:09:02.703 03:56:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cb54add4-ab07-4f40-ba61-5b47c8b5248a 00:09:02.703 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:02.963 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.963 00:09:02.963 real 0m15.108s 00:09:02.963 user 0m15.033s 00:09:02.963 sys 0m0.944s 00:09:02.963 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.963 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:02.963 ************************************ 00:09:02.963 END TEST lvs_grow_clean 00:09:02.963 ************************************ 00:09:02.963 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:02.963 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.963 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.963 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:03.222 ************************************ 00:09:03.222 START TEST lvs_grow_dirty 00:09:03.222 ************************************ 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:03.222 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:03.482 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:03.482 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:03.482 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:03.741 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:03.741 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:03.741 03:56:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 lvol 150 00:09:03.741 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8e59f083-a760-4247-909d-3ca769c785a5 00:09:03.741 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:03.741 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:04.000 [2024-12-10 03:56:58.253997] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:04.000 [2024-12-10 03:56:58.254043] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:04.000 true 00:09:04.000 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:04.000 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:04.259 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:04.259 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:04.259 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8e59f083-a760-4247-909d-3ca769c785a5 00:09:04.517 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:04.777 [2024-12-10 03:56:58.908076] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:04.777 03:56:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=643732 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 643732 /var/tmp/bdevperf.sock 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 643732 ']' 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.777 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.777 [2024-12-10 03:56:59.103052] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
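Both the clean and dirty variants export the lvol and point bdevperf at it with the same RPCs; condensed here, with the UUID variables carried over from the sketch above and $rpc again standing in for the full rpc.py path:

  # Target side: expose the lvol bdev over NVMe-oF/RDMA on the first port's IP.
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

  # Initiator side: bdevperf is started with -z, so it idles on its own RPC
  # socket until the controller is attached through /var/tmp/bdevperf.sock.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
       -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

The attach surfaces the namespace as Nvme0n1, the bdev that the 10-second randwrite jobs report against.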
00:09:04.777 [2024-12-10 03:56:59.103099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643732 ] 00:09:04.777 [2024-12-10 03:56:59.160110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.036 [2024-12-10 03:56:59.200497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.036 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.036 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:05.036 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:05.295 Nvme0n1 00:09:05.295 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:05.554 [ 00:09:05.554 { 00:09:05.554 "name": "Nvme0n1", 00:09:05.554 "aliases": [ 00:09:05.554 "8e59f083-a760-4247-909d-3ca769c785a5" 00:09:05.554 ], 00:09:05.554 "product_name": "NVMe disk", 00:09:05.554 "block_size": 4096, 00:09:05.554 "num_blocks": 38912, 00:09:05.554 "uuid": "8e59f083-a760-4247-909d-3ca769c785a5", 00:09:05.554 "numa_id": 0, 00:09:05.554 "assigned_rate_limits": { 00:09:05.554 "rw_ios_per_sec": 0, 00:09:05.554 "rw_mbytes_per_sec": 0, 00:09:05.554 "r_mbytes_per_sec": 0, 00:09:05.554 "w_mbytes_per_sec": 0 00:09:05.554 }, 00:09:05.554 "claimed": false, 00:09:05.554 "zoned": false, 00:09:05.554 "supported_io_types": { 00:09:05.554 "read": true, 00:09:05.554 "write": true, 00:09:05.554 "unmap": true, 00:09:05.554 "flush": true, 00:09:05.554 "reset": true, 00:09:05.554 "nvme_admin": true, 00:09:05.554 "nvme_io": true, 00:09:05.554 "nvme_io_md": false, 00:09:05.554 "write_zeroes": true, 00:09:05.554 "zcopy": false, 00:09:05.554 "get_zone_info": false, 00:09:05.554 "zone_management": false, 00:09:05.554 "zone_append": false, 00:09:05.554 "compare": true, 00:09:05.554 "compare_and_write": true, 00:09:05.554 "abort": true, 00:09:05.554 "seek_hole": false, 00:09:05.554 "seek_data": false, 00:09:05.554 "copy": true, 00:09:05.554 "nvme_iov_md": false 00:09:05.554 }, 00:09:05.554 "memory_domains": [ 00:09:05.554 { 00:09:05.554 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:05.554 "dma_device_type": 0 00:09:05.554 } 00:09:05.554 ], 00:09:05.554 "driver_specific": { 00:09:05.554 "nvme": [ 00:09:05.554 { 00:09:05.554 "trid": { 00:09:05.554 "trtype": "RDMA", 00:09:05.554 "adrfam": "IPv4", 00:09:05.554 "traddr": "192.168.100.8", 00:09:05.554 "trsvcid": "4420", 00:09:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:05.554 }, 00:09:05.554 "ctrlr_data": { 00:09:05.554 "cntlid": 1, 00:09:05.554 "vendor_id": "0x8086", 00:09:05.554 "model_number": "SPDK bdev Controller", 00:09:05.554 "serial_number": "SPDK0", 00:09:05.554 "firmware_revision": "25.01", 00:09:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.554 "oacs": { 00:09:05.554 "security": 0, 00:09:05.555 "format": 0, 00:09:05.555 "firmware": 0, 00:09:05.555 "ns_manage": 0 00:09:05.555 }, 00:09:05.555 "multi_ctrlr": true, 
00:09:05.555 "ana_reporting": false 00:09:05.555 }, 00:09:05.555 "vs": { 00:09:05.555 "nvme_version": "1.3" 00:09:05.555 }, 00:09:05.555 "ns_data": { 00:09:05.555 "id": 1, 00:09:05.555 "can_share": true 00:09:05.555 } 00:09:05.555 } 00:09:05.555 ], 00:09:05.555 "mp_policy": "active_passive" 00:09:05.555 } 00:09:05.555 } 00:09:05.555 ] 00:09:05.555 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=643994 00:09:05.555 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:05.555 03:56:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:05.555 Running I/O for 10 seconds... 00:09:06.492 Latency(us) 00:09:06.492 [2024-12-10T02:57:00.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.492 Nvme0n1 : 1.00 36258.00 141.63 0.00 0.00 0.00 0.00 0.00 00:09:06.492 [2024-12-10T02:57:00.881Z] =================================================================================================================== 00:09:06.492 [2024-12-10T02:57:00.881Z] Total : 36258.00 141.63 0.00 0.00 0.00 0.00 0.00 00:09:06.492 00:09:07.429 03:57:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:07.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.688 Nvme0n1 : 2.00 36849.00 143.94 0.00 0.00 0.00 0.00 0.00 00:09:07.688 [2024-12-10T02:57:02.077Z] =================================================================================================================== 00:09:07.688 [2024-12-10T02:57:02.077Z] Total : 36849.00 143.94 0.00 0.00 0.00 0.00 0.00 00:09:07.688 00:09:07.688 true 00:09:07.688 03:57:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:07.688 03:57:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:07.947 03:57:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:07.947 03:57:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:07.947 03:57:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 643994 00:09:08.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.515 Nvme0n1 : 3.00 37066.67 144.79 0.00 0.00 0.00 0.00 0.00 00:09:08.515 [2024-12-10T02:57:02.904Z] =================================================================================================================== 00:09:08.515 [2024-12-10T02:57:02.904Z] Total : 37066.67 144.79 0.00 0.00 0.00 0.00 0.00 00:09:08.515 00:09:09.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.451 Nvme0n1 : 4.00 37280.75 145.63 0.00 0.00 0.00 0.00 0.00 00:09:09.451 [2024-12-10T02:57:03.840Z] 
=================================================================================================================== 00:09:09.451 [2024-12-10T02:57:03.840Z] Total : 37280.75 145.63 0.00 0.00 0.00 0.00 0.00 00:09:09.451 00:09:10.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.828 Nvme0n1 : 5.00 37414.20 146.15 0.00 0.00 0.00 0.00 0.00 00:09:10.828 [2024-12-10T02:57:05.217Z] =================================================================================================================== 00:09:10.828 [2024-12-10T02:57:05.217Z] Total : 37414.20 146.15 0.00 0.00 0.00 0.00 0.00 00:09:10.828 00:09:11.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.763 Nvme0n1 : 6.00 37455.83 146.31 0.00 0.00 0.00 0.00 0.00 00:09:11.763 [2024-12-10T02:57:06.152Z] =================================================================================================================== 00:09:11.763 [2024-12-10T02:57:06.152Z] Total : 37455.83 146.31 0.00 0.00 0.00 0.00 0.00 00:09:11.763 00:09:12.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.699 Nvme0n1 : 7.00 37526.71 146.59 0.00 0.00 0.00 0.00 0.00 00:09:12.699 [2024-12-10T02:57:07.088Z] =================================================================================================================== 00:09:12.699 [2024-12-10T02:57:07.088Z] Total : 37526.71 146.59 0.00 0.00 0.00 0.00 0.00 00:09:12.699 00:09:13.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.636 Nvme0n1 : 8.00 37584.12 146.81 0.00 0.00 0.00 0.00 0.00 00:09:13.636 [2024-12-10T02:57:08.025Z] =================================================================================================================== 00:09:13.636 [2024-12-10T02:57:08.025Z] Total : 37584.12 146.81 0.00 0.00 0.00 0.00 0.00 00:09:13.636 00:09:14.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.572 Nvme0n1 : 9.00 37635.44 147.01 0.00 0.00 0.00 0.00 0.00 00:09:14.572 [2024-12-10T02:57:08.961Z] =================================================================================================================== 00:09:14.572 [2024-12-10T02:57:08.961Z] Total : 37635.44 147.01 0.00 0.00 0.00 0.00 0.00 00:09:14.572 00:09:15.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.512 Nvme0n1 : 10.00 37673.40 147.16 0.00 0.00 0.00 0.00 0.00 00:09:15.512 [2024-12-10T02:57:09.901Z] =================================================================================================================== 00:09:15.512 [2024-12-10T02:57:09.901Z] Total : 37673.40 147.16 0.00 0.00 0.00 0.00 0.00 00:09:15.512 00:09:15.512 00:09:15.512 Latency(us) 00:09:15.512 [2024-12-10T02:57:09.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.512 Nvme0n1 : 10.00 37673.85 147.16 0.00 0.00 3394.85 2512.21 8107.05 00:09:15.512 [2024-12-10T02:57:09.901Z] =================================================================================================================== 00:09:15.512 [2024-12-10T02:57:09.901Z] Total : 37673.85 147.16 0.00 0.00 3394.85 2512.21 8107.05 00:09:15.512 { 00:09:15.512 "results": [ 00:09:15.512 { 00:09:15.512 "job": "Nvme0n1", 00:09:15.512 "core_mask": "0x2", 00:09:15.512 "workload": "randwrite", 00:09:15.512 "status": "finished", 00:09:15.512 "queue_depth": 128, 00:09:15.512 "io_size": 4096, 
00:09:15.512 "runtime": 10.003277, 00:09:15.512 "iops": 37673.85427795311, 00:09:15.512 "mibps": 147.16349327325435, 00:09:15.512 "io_failed": 0, 00:09:15.512 "io_timeout": 0, 00:09:15.512 "avg_latency_us": 3394.8516001750913, 00:09:15.512 "min_latency_us": 2512.213333333333, 00:09:15.512 "max_latency_us": 8107.045925925926 00:09:15.512 } 00:09:15.512 ], 00:09:15.512 "core_count": 1 00:09:15.512 } 00:09:15.512 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 643732 00:09:15.512 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 643732 ']' 00:09:15.512 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 643732 00:09:15.512 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:15.512 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.512 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 643732 00:09:15.771 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:15.771 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:15.771 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 643732' 00:09:15.771 killing process with pid 643732 00:09:15.771 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 643732 00:09:15.771 Received shutdown signal, test time was about 10.000000 seconds 00:09:15.771 00:09:15.771 Latency(us) 00:09:15.771 [2024-12-10T02:57:10.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.771 [2024-12-10T02:57:10.160Z] =================================================================================================================== 00:09:15.771 [2024-12-10T02:57:10.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:15.771 03:57:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 643732 00:09:15.771 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:16.030 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:16.289 03:57:10 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 640462 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 640462 00:09:16.289 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 640462 Killed "${NVMF_APP[@]}" "$@" 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.289 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=646481 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 646481 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 646481 ']' 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:16.549 [2024-12-10 03:57:10.723866] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:09:16.549 [2024-12-10 03:57:10.723911] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.549 [2024-12-10 03:57:10.782028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.549 [2024-12-10 03:57:10.820072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.549 [2024-12-10 03:57:10.820104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.549 [2024-12-10 03:57:10.820111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.549 [2024-12-10 03:57:10.820116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
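[editor's note] The restarted target then replays the dirty lvstore: re-creating the aio bdev triggers blobstore recovery, after which the test waits for the lvol bdev to reappear and re-checks the cluster counts. A minimal sketch of that recovery sequence, assuming the default rpc.py socket; the paths, UUIDs, and expected counts below are taken from this run and are illustrative only:

    # sketch of the dirty-lvstore recovery flow exercised by lvs_grow_dirty
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    lvs_uuid=bec5d9fb-5f19-4f03-bcc1-194a1425b454     # lvstore UUID from this run
    lvol_uuid=8e59f083-a760-4247-909d-3ca769c785a5    # lvol bdev UUID from this run

    # re-create the backing aio bdev (4096-byte blocks); loading it makes
    # the blobstore perform recovery on the dirty metadata
    $rpc bdev_aio_create \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev \
        aio_bdev 4096
    $rpc bdev_wait_for_examine

    # the lvol bdev should reappear once recovery completes (2000 ms timeout)
    $rpc bdev_get_bdevs -b "$lvol_uuid" -t 2000 >/dev/null

    # verify the grown lvstore survived the crash: 61 free of 99 total clusters
    free=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || echo "unexpected cluster counts: $free/$total"

The log output that follows shows exactly this: bs_recover replaying blobs 0x0 and 0x1, the lvol returning with num_allocated_clusters 38, and the free/total cluster checks passing.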
00:09:16.549 [2024-12-10 03:57:10.820121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.549 [2024-12-10 03:57:10.820596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.549 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:16.808 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.808 03:57:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.808 [2024-12-10 03:57:11.110585] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:16.808 [2024-12-10 03:57:11.110663] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:16.808 [2024-12-10 03:57:11.110687] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:16.808 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:16.808 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8e59f083-a760-4247-909d-3ca769c785a5 00:09:16.808 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8e59f083-a760-4247-909d-3ca769c785a5 00:09:16.808 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.808 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:16.809 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.809 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.809 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:17.068 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8e59f083-a760-4247-909d-3ca769c785a5 -t 2000 00:09:17.327 [ 00:09:17.327 { 00:09:17.327 "name": "8e59f083-a760-4247-909d-3ca769c785a5", 00:09:17.327 "aliases": [ 00:09:17.327 "lvs/lvol" 00:09:17.327 ], 00:09:17.327 "product_name": "Logical Volume", 00:09:17.327 "block_size": 4096, 00:09:17.327 "num_blocks": 38912, 00:09:17.327 "uuid": "8e59f083-a760-4247-909d-3ca769c785a5", 00:09:17.327 "assigned_rate_limits": { 00:09:17.327 "rw_ios_per_sec": 0, 00:09:17.327 "rw_mbytes_per_sec": 0, 
00:09:17.327 "r_mbytes_per_sec": 0, 00:09:17.327 "w_mbytes_per_sec": 0 00:09:17.327 }, 00:09:17.327 "claimed": false, 00:09:17.327 "zoned": false, 00:09:17.327 "supported_io_types": { 00:09:17.327 "read": true, 00:09:17.327 "write": true, 00:09:17.327 "unmap": true, 00:09:17.327 "flush": false, 00:09:17.327 "reset": true, 00:09:17.327 "nvme_admin": false, 00:09:17.327 "nvme_io": false, 00:09:17.327 "nvme_io_md": false, 00:09:17.327 "write_zeroes": true, 00:09:17.327 "zcopy": false, 00:09:17.327 "get_zone_info": false, 00:09:17.327 "zone_management": false, 00:09:17.327 "zone_append": false, 00:09:17.327 "compare": false, 00:09:17.327 "compare_and_write": false, 00:09:17.327 "abort": false, 00:09:17.327 "seek_hole": true, 00:09:17.327 "seek_data": true, 00:09:17.327 "copy": false, 00:09:17.327 "nvme_iov_md": false 00:09:17.327 }, 00:09:17.327 "driver_specific": { 00:09:17.327 "lvol": { 00:09:17.327 "lvol_store_uuid": "bec5d9fb-5f19-4f03-bcc1-194a1425b454", 00:09:17.327 "base_bdev": "aio_bdev", 00:09:17.327 "thin_provision": false, 00:09:17.327 "num_allocated_clusters": 38, 00:09:17.327 "snapshot": false, 00:09:17.327 "clone": false, 00:09:17.327 "esnap_clone": false 00:09:17.327 } 00:09:17.327 } 00:09:17.327 } 00:09:17.327 ] 00:09:17.327 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:17.327 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:17.327 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:17.327 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:17.327 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:17.327 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:17.586 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:17.586 03:57:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:17.845 [2024-12-10 03:57:12.003224] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:17.845 request: 00:09:17.845 { 00:09:17.845 "uuid": "bec5d9fb-5f19-4f03-bcc1-194a1425b454", 00:09:17.845 "method": "bdev_lvol_get_lvstores", 00:09:17.845 "req_id": 1 00:09:17.845 } 00:09:17.845 Got JSON-RPC error response 00:09:17.845 response: 00:09:17.845 { 00:09:17.845 "code": -19, 00:09:17.845 "message": "No such device" 00:09:17.845 } 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:17.845 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.104 aio_bdev 00:09:18.104 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8e59f083-a760-4247-909d-3ca769c785a5 00:09:18.104 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8e59f083-a760-4247-909d-3ca769c785a5 00:09:18.104 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.104 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:18.104 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.104 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.104 03:57:12 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:18.363 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8e59f083-a760-4247-909d-3ca769c785a5 -t 2000 00:09:18.363 [ 00:09:18.363 { 00:09:18.363 "name": "8e59f083-a760-4247-909d-3ca769c785a5", 00:09:18.363 "aliases": [ 00:09:18.363 "lvs/lvol" 00:09:18.363 ], 00:09:18.363 "product_name": "Logical Volume", 00:09:18.363 "block_size": 4096, 00:09:18.363 "num_blocks": 38912, 00:09:18.363 "uuid": "8e59f083-a760-4247-909d-3ca769c785a5", 00:09:18.363 "assigned_rate_limits": { 00:09:18.363 "rw_ios_per_sec": 0, 00:09:18.363 "rw_mbytes_per_sec": 0, 00:09:18.363 "r_mbytes_per_sec": 0, 00:09:18.363 "w_mbytes_per_sec": 0 00:09:18.363 }, 00:09:18.363 "claimed": false, 00:09:18.363 "zoned": false, 00:09:18.363 "supported_io_types": { 00:09:18.363 "read": true, 00:09:18.363 "write": true, 00:09:18.363 "unmap": true, 00:09:18.363 "flush": false, 00:09:18.363 "reset": true, 00:09:18.363 "nvme_admin": false, 00:09:18.363 "nvme_io": false, 00:09:18.363 "nvme_io_md": false, 00:09:18.363 "write_zeroes": true, 00:09:18.363 "zcopy": false, 00:09:18.363 "get_zone_info": false, 00:09:18.363 "zone_management": false, 00:09:18.363 "zone_append": false, 00:09:18.363 "compare": false, 00:09:18.363 "compare_and_write": false, 00:09:18.363 "abort": false, 00:09:18.363 "seek_hole": true, 00:09:18.363 "seek_data": true, 00:09:18.363 "copy": false, 00:09:18.363 "nvme_iov_md": false 00:09:18.363 }, 00:09:18.363 "driver_specific": { 00:09:18.363 "lvol": { 00:09:18.363 "lvol_store_uuid": "bec5d9fb-5f19-4f03-bcc1-194a1425b454", 00:09:18.363 "base_bdev": "aio_bdev", 00:09:18.363 "thin_provision": false, 00:09:18.363 "num_allocated_clusters": 38, 00:09:18.363 "snapshot": false, 00:09:18.363 "clone": false, 00:09:18.363 "esnap_clone": false 00:09:18.363 } 00:09:18.363 } 00:09:18.363 } 00:09:18.363 ] 00:09:18.364 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:18.364 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:18.364 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:18.622 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:18.623 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:18.623 03:57:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:18.881 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:18.881 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8e59f083-a760-4247-909d-3ca769c785a5 00:09:18.881 03:57:13 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bec5d9fb-5f19-4f03-bcc1-194a1425b454 00:09:19.140 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.399 00:09:19.399 real 0m16.256s 00:09:19.399 user 0m43.074s 00:09:19.399 sys 0m2.729s 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:19.399 ************************************ 00:09:19.399 END TEST lvs_grow_dirty 00:09:19.399 ************************************ 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:19.399 nvmf_trace.0 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:19.399 rmmod nvme_rdma 00:09:19.399 rmmod nvme_fabrics 00:09:19.399 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.400 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:19.400 
03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:19.400 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 646481 ']' 00:09:19.400 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 646481 00:09:19.400 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 646481 ']' 00:09:19.400 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 646481 00:09:19.400 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:19.400 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.400 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 646481 00:09:19.659 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.659 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.659 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 646481' 00:09:19.659 killing process with pid 646481 00:09:19.659 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 646481 00:09:19.659 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 646481 00:09:19.659 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.659 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:19.659 00:09:19.659 real 0m38.286s 00:09:19.659 user 1m3.332s 00:09:19.659 sys 0m8.483s 00:09:19.659 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.659 03:57:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.659 ************************************ 00:09:19.659 END TEST nvmf_lvs_grow 00:09:19.659 ************************************ 00:09:19.659 03:57:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:19.659 03:57:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.659 03:57:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.659 03:57:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.659 ************************************ 00:09:19.659 START TEST nvmf_bdev_io_wait 00:09:19.659 ************************************ 00:09:19.659 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:19.919 * Looking for test storage... 
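[editor's note] The teardown above follows the common nvmftestfini pattern: archive any tracepoint shm files the target left behind, stop the nvmf_tgt process, then unload the kernel NVMe-oF modules. A minimal sketch of that pattern, assuming the shm id and pid come from the test environment (the pid and output directory below are illustrative):

    # sketch of the process_shm / nvmftestfini teardown shown above
    shm_id=0
    nvmf_pid=646481                     # pid from this run, for illustration

    # archive any tracepoint files the target left in /dev/shm
    for f in $(find /dev/shm -name "*.${shm_id}" -printf '%f\n'); do
        tar -C /dev/shm/ -czf "/tmp/${f}_shm.tar.gz" "$f"   # output dir is illustrative
    done

    # stop the target; the harness waits on the pid before unloading modules
    kill "$nvmf_pid"
    sync
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics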
00:09:19.919 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.919 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:19.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.919 --rc genhtml_branch_coverage=1 00:09:19.919 --rc genhtml_function_coverage=1 00:09:19.919 --rc genhtml_legend=1 00:09:19.919 --rc geninfo_all_blocks=1 00:09:19.920 --rc geninfo_unexecuted_blocks=1 00:09:19.920 00:09:19.920 ' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.920 --rc genhtml_branch_coverage=1 00:09:19.920 --rc genhtml_function_coverage=1 00:09:19.920 --rc genhtml_legend=1 00:09:19.920 --rc geninfo_all_blocks=1 00:09:19.920 --rc geninfo_unexecuted_blocks=1 00:09:19.920 00:09:19.920 ' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.920 --rc genhtml_branch_coverage=1 00:09:19.920 --rc genhtml_function_coverage=1 00:09:19.920 --rc genhtml_legend=1 00:09:19.920 --rc geninfo_all_blocks=1 00:09:19.920 --rc geninfo_unexecuted_blocks=1 00:09:19.920 00:09:19.920 ' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:19.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.920 --rc genhtml_branch_coverage=1 00:09:19.920 --rc genhtml_function_coverage=1 00:09:19.920 --rc genhtml_legend=1 00:09:19.920 --rc geninfo_all_blocks=1 00:09:19.920 --rc geninfo_unexecuted_blocks=1 00:09:19.920 00:09:19.920 ' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.920 03:57:14 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.920 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.920 03:57:14 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.491 03:57:20 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.491 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:26.492 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:26.492 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:26.492 03:57:20 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:26.492 Found net devices under 0000:18:00.0: mlx_0_0 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:26.492 Found net devices under 0000:18:00.1: mlx_0_1 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:26.492 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.492 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:26.492 altname enp24s0f0np0 00:09:26.492 altname ens785f0np0 00:09:26.492 inet 192.168.100.8/24 scope global mlx_0_0 00:09:26.492 valid_lft forever preferred_lft forever 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:26.492 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.492 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:26.492 altname enp24s0f1np1 00:09:26.492 altname ens785f1np1 00:09:26.492 inet 192.168.100.9/24 scope global mlx_0_1 00:09:26.492 valid_lft forever preferred_lft forever 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t 
rxe_net_devs 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:26.492 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:26.493 192.168.100.9' 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:26.493 192.168.100.9' 00:09:26.493 
03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:26.493 192.168.100.9' 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=650579 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 650579 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 650579 ']' 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 [2024-12-10 03:57:20.292260] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
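The address plumbing traced above reduces to two small helpers. A minimal sketch, using the names from the trace (the in-tree nvmf/common.sh bodies carry more error handling than shown):

  # Per-interface IPv4 lookup, as common.sh@116-117 traces it:
  # "ip -o -4" prints one record per address; field 4 is "ADDR/PREFIX".
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # Splitting the newline-separated list into the two target IPs
  # (common.sh@485-486): head keeps the first line, tail skips it.
  RDMA_IP_LIST=$(get_ip_address mlx_0_0; get_ip_address mlx_0_1)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)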
00:09:26.493 [2024-12-10 03:57:20.292313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.493 [2024-12-10 03:57:20.351231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.493 [2024-12-10 03:57:20.390189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.493 [2024-12-10 03:57:20.390224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.493 [2024-12-10 03:57:20.390231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.493 [2024-12-10 03:57:20.390237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.493 [2024-12-10 03:57:20.390242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.493 [2024-12-10 03:57:20.391493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.493 [2024-12-10 03:57:20.391587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.493 [2024-12-10 03:57:20.391679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.493 [2024-12-10 03:57:20.391681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 03:57:20 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 [2024-12-10 03:57:20.565502] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23ec110/0x23f0600) succeed. 00:09:26.493 [2024-12-10 03:57:20.573969] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23ed7a0/0x2431ca0) succeed. 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 Malloc0 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 [2024-12-10 03:57:20.740228] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=650608 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=650610 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
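Stripped of the xtrace plumbing, the bring-up traced above is one app launch plus a short RPC sequence. A sketch assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock socket:

  # Launch the target paused (--wait-for-rpc), as bdev_io_wait.sh@15 does.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # Poll until the RPC socket answers (the role waitforlisten plays).
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  ./scripts/rpc.py bdev_set_options -p 5 -c 1           # bdev_io_wait.sh@18
  ./scripts/rpc.py framework_start_init                 # @19: finish deferred init
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420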
00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:26.493 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:26.493 { 00:09:26.493 "params": { 00:09:26.493 "name": "Nvme$subsystem", 00:09:26.493 "trtype": "$TEST_TRANSPORT", 00:09:26.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.493 "adrfam": "ipv4", 00:09:26.493 "trsvcid": "$NVMF_PORT", 00:09:26.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.494 "hdgst": ${hdgst:-false}, 00:09:26.494 "ddgst": ${ddgst:-false} 00:09:26.494 }, 00:09:26.494 "method": "bdev_nvme_attach_controller" 00:09:26.494 } 00:09:26.494 EOF 00:09:26.494 )") 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=650612 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=650615 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:26.494 { 00:09:26.494 "params": { 00:09:26.494 "name": "Nvme$subsystem", 00:09:26.494 "trtype": "$TEST_TRANSPORT", 00:09:26.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.494 "adrfam": "ipv4", 00:09:26.494 "trsvcid": "$NVMF_PORT", 00:09:26.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.494 "hdgst": ${hdgst:-false}, 00:09:26.494 "ddgst": ${ddgst:-false} 00:09:26.494 }, 00:09:26.494 "method": "bdev_nvme_attach_controller" 00:09:26.494 } 00:09:26.494 EOF 00:09:26.494 )") 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 
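The /dev/fd/63 seen in each bdevperf command line is bash process substitution: the JSON emitted by gen_nvmf_target_json arrives on an anonymous file descriptor instead of a temp file. One of the four workers, reconstructed (write shown; the read, flush and unmap instances differ only in -m, -i and -w; gen_nvmf_target_json comes from test/nvmf/common.sh):

  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json) &
  WRITE_PID=$!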
00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:26.494 { 00:09:26.494 "params": { 00:09:26.494 "name": "Nvme$subsystem", 00:09:26.494 "trtype": "$TEST_TRANSPORT", 00:09:26.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.494 "adrfam": "ipv4", 00:09:26.494 "trsvcid": "$NVMF_PORT", 00:09:26.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.494 "hdgst": ${hdgst:-false}, 00:09:26.494 "ddgst": ${ddgst:-false} 00:09:26.494 }, 00:09:26.494 "method": "bdev_nvme_attach_controller" 00:09:26.494 } 00:09:26.494 EOF 00:09:26.494 )") 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:26.494 { 00:09:26.494 "params": { 00:09:26.494 "name": "Nvme$subsystem", 00:09:26.494 "trtype": "$TEST_TRANSPORT", 00:09:26.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.494 "adrfam": "ipv4", 00:09:26.494 "trsvcid": "$NVMF_PORT", 00:09:26.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.494 "hdgst": ${hdgst:-false}, 00:09:26.494 "ddgst": ${ddgst:-false} 00:09:26.494 }, 00:09:26.494 "method": "bdev_nvme_attach_controller" 00:09:26.494 } 00:09:26.494 EOF 00:09:26.494 )") 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 650608 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:26.494 "params": { 00:09:26.494 "name": "Nvme1", 00:09:26.494 "trtype": "rdma", 00:09:26.494 "traddr": "192.168.100.8", 00:09:26.494 "adrfam": "ipv4", 00:09:26.494 "trsvcid": "4420", 00:09:26.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:26.494 "hdgst": false, 00:09:26.494 "ddgst": false 00:09:26.494 }, 00:09:26.494 "method": "bdev_nvme_attach_controller" 00:09:26.494 }' 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:26.494 "params": { 00:09:26.494 "name": "Nvme1", 00:09:26.494 "trtype": "rdma", 00:09:26.494 "traddr": "192.168.100.8", 00:09:26.494 "adrfam": "ipv4", 00:09:26.494 "trsvcid": "4420", 00:09:26.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:26.494 "hdgst": false, 00:09:26.494 "ddgst": false 00:09:26.494 }, 00:09:26.494 "method": "bdev_nvme_attach_controller" 00:09:26.494 }' 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:26.494 "params": { 00:09:26.494 "name": "Nvme1", 00:09:26.494 "trtype": "rdma", 00:09:26.494 "traddr": "192.168.100.8", 00:09:26.494 "adrfam": "ipv4", 00:09:26.494 "trsvcid": "4420", 00:09:26.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:26.494 "hdgst": false, 00:09:26.494 "ddgst": false 00:09:26.494 }, 00:09:26.494 "method": "bdev_nvme_attach_controller" 00:09:26.494 }' 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:26.494 03:57:20 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:26.494 "params": { 00:09:26.494 "name": "Nvme1", 00:09:26.494 "trtype": "rdma", 00:09:26.494 "traddr": "192.168.100.8", 00:09:26.494 "adrfam": "ipv4", 00:09:26.494 "trsvcid": "4420", 00:09:26.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:26.494 "hdgst": false, 00:09:26.494 "ddgst": false 00:09:26.494 }, 00:09:26.494 "method": "bdev_nvme_attach_controller" 00:09:26.494 }' 00:09:26.494 [2024-12-10 03:57:20.790000] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:09:26.494 [2024-12-10 03:57:20.790046] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:26.494 [2024-12-10 03:57:20.791847] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:09:26.494 [2024-12-10 03:57:20.791888] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:26.494 [2024-12-10 03:57:20.793498] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:09:26.494 [2024-12-10 03:57:20.793539] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:26.494 [2024-12-10 03:57:20.793990] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
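Each of the four printf'd config blocks resolves to one bdev_nvme_attach_controller call inside its bdevperf instance. Issued by hand against a running app it would look roughly like this (the socket path here is an assumption; each -i N instance in this run has its own):

  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller \
      -b Nvme1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1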
00:09:26.494 [2024-12-10 03:57:20.794027] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:26.754 [2024-12-10 03:57:20.967205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.754 [2024-12-10 03:57:21.007535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:26.754 [2024-12-10 03:57:21.054775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.754 [2024-12-10 03:57:21.107071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:26.754 [2024-12-10 03:57:21.114251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.013 [2024-12-10 03:57:21.155130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:27.013 [2024-12-10 03:57:21.175689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.013 [2024-12-10 03:57:21.215257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:27.013 Running I/O for 1 seconds... 00:09:27.013 Running I/O for 1 seconds... 00:09:27.013 Running I/O for 1 seconds... 00:09:27.013 Running I/O for 1 seconds... 00:09:27.951 16008.00 IOPS, 62.53 MiB/s [2024-12-10T02:57:22.340Z] 269576.00 IOPS, 1053.03 MiB/s [2024-12-10T02:57:22.340Z] 20682.00 IOPS, 80.79 MiB/s 00:09:27.951 Latency(us) 00:09:27.951 [2024-12-10T02:57:22.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.951 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:27.951 Nvme1n1 : 1.00 269205.77 1051.59 0.00 0.00 473.42 201.77 1917.53 00:09:27.951 [2024-12-10T02:57:22.340Z] =================================================================================================================== 00:09:27.951 [2024-12-10T02:57:22.340Z] Total : 269205.77 1051.59 0.00 0.00 473.42 201.77 1917.53 00:09:27.951 00:09:27.951 Latency(us) 00:09:27.951 [2024-12-10T02:57:22.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.951 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:27.951 Nvme1n1 : 1.01 20722.98 80.95 0.00 0.00 6160.82 3810.80 16214.09 00:09:27.951 [2024-12-10T02:57:22.340Z] =================================================================================================================== 00:09:27.951 [2024-12-10T02:57:22.340Z] Total : 20722.98 80.95 0.00 0.00 6160.82 3810.80 16214.09 00:09:27.951 00:09:27.951 Latency(us) 00:09:27.951 [2024-12-10T02:57:22.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.951 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:27.951 Nvme1n1 : 1.01 16047.40 62.69 0.00 0.00 7950.85 5072.97 17670.45 00:09:27.951 [2024-12-10T02:57:22.340Z] =================================================================================================================== 00:09:27.951 [2024-12-10T02:57:22.340Z] Total : 16047.40 62.69 0.00 0.00 7950.85 5072.97 17670.45 00:09:28.211 15741.00 IOPS, 61.49 MiB/s 00:09:28.211 Latency(us) 00:09:28.211 [2024-12-10T02:57:22.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.211 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:28.211 Nvme1n1 : 1.01 15832.17 61.84 0.00 0.00 8066.45 3046.21 19029.71 00:09:28.211 [2024-12-10T02:57:22.600Z] 
=================================================================================================================== 00:09:28.211 [2024-12-10T02:57:22.600Z] Total : 15832.17 61.84 0.00 0.00 8066.45 3046.21 19029.71 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 650610 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 650612 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 650615 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:28.211 rmmod nvme_rdma 00:09:28.211 rmmod nvme_fabrics 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 650579 ']' 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 650579 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 650579 ']' 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 650579 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.211 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650579 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
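In outline, the teardown running through this stretch of the trace collapses to a handful of commands (a sketch: the pid variables are the ones captured at launch, and killprocess is reduced to its effect):

  wait "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"     # remaining bdevperf instances, @38-40
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # @42
  trap - SIGINT SIGTERM EXIT                     # @44: drop the cleanup trap
  modprobe -v -r nvme-rdma                       # host-side module unload
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"             # stop the target (pid 650579 here)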
00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650579' 00:09:28.470 killing process with pid 650579 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 650579 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 650579 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:28.470 00:09:28.470 real 0m8.771s 00:09:28.470 user 0m16.535s 00:09:28.470 sys 0m5.725s 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.470 ************************************ 00:09:28.470 END TEST nvmf_bdev_io_wait 00:09:28.470 ************************************ 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.470 03:57:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.730 ************************************ 00:09:28.730 START TEST nvmf_queue_depth 00:09:28.730 ************************************ 00:09:28.730 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:28.730 * Looking for test storage... 
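The probe that follows (lt 1.15 2 against the installed lcov) compares dotted version strings field by field. Reduced to a sketch (the in-tree cmp_versions in scripts/common.sh supports more operators than plain less-than):

  version_lt() {                        # true if $1 < $2, numeric per dot-field
      local IFS=. i
      local -a a=($1) b=($2)
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                          # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"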
00:09:28.730 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:28.730 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.730 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.730 03:57:22 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.730 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.730 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.730 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.730 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.730 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.730 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.731 --rc genhtml_branch_coverage=1 00:09:28.731 --rc genhtml_function_coverage=1 00:09:28.731 --rc genhtml_legend=1 00:09:28.731 --rc geninfo_all_blocks=1 00:09:28.731 --rc geninfo_unexecuted_blocks=1 00:09:28.731 00:09:28.731 ' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.731 --rc genhtml_branch_coverage=1 00:09:28.731 --rc genhtml_function_coverage=1 00:09:28.731 --rc genhtml_legend=1 00:09:28.731 --rc geninfo_all_blocks=1 00:09:28.731 --rc geninfo_unexecuted_blocks=1 00:09:28.731 00:09:28.731 ' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.731 --rc genhtml_branch_coverage=1 00:09:28.731 --rc genhtml_function_coverage=1 00:09:28.731 --rc genhtml_legend=1 00:09:28.731 --rc geninfo_all_blocks=1 00:09:28.731 --rc geninfo_unexecuted_blocks=1 00:09:28.731 00:09:28.731 ' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.731 --rc genhtml_branch_coverage=1 00:09:28.731 --rc genhtml_function_coverage=1 00:09:28.731 --rc genhtml_legend=1 00:09:28.731 --rc geninfo_all_blocks=1 00:09:28.731 --rc geninfo_unexecuted_blocks=1 00:09:28.731 00:09:28.731 ' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.731 03:57:23 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.731 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:28.731 03:57:23 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:35.305 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:35.305 Found 0000:18:00.1 (0x15b3 - 0x1015) 
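The loop that printed the two "Found net devices under ..." lines resolves each mlx5 PCI function to its netdev purely through sysfs. Reduced to the essentials of the nvmf/common.sh@410-429 span traced above:

  net_devs=()
  for pci in 0000:18:00.0 0000:18:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the net/ dir
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip to basenames
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done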
00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:35.305 Found net devices under 0000:18:00.0: mlx_0_0 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:35.305 Found net devices under 0000:18:00.1: mlx_0_1 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:35.305 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:35.306 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:35.306 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:35.306 altname enp24s0f0np0 00:09:35.306 altname ens785f0np0 00:09:35.306 inet 192.168.100.8/24 scope global mlx_0_0 00:09:35.306 valid_lft forever preferred_lft forever 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:35.306 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:35.306 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:35.306 altname enp24s0f1np1 00:09:35.306 altname ens785f1np1 00:09:35.306 inet 192.168.100.9/24 scope global mlx_0_1 00:09:35.306 valid_lft forever preferred_lft forever 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:35.306 03:57:28 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:35.306 192.168.100.9' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # head -n 1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 
00:09:35.306 192.168.100.9' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:35.306 192.168.100.9' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=654405 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 654405 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 654405 ']' 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.306 03:57:28 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.307 [2024-12-10 03:57:28.840188] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
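The target addresses used for the rest of the test fall out of the trace above: each RDMA-capable port's IPv4 address is read with ip/awk/cut, and the resulting two-line list is split with head/tail. Condensed into a sketch (ip_of is a made-up name standing in for the traced get_ip_address steps):

    # Sketch of the address harvesting traced above; ip_of is illustrative.
    ip_of() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST=$(printf '%s\n' "$(ip_of mlx_0_0)" "$(ip_of mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9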
00:09:35.307 [2024-12-10 03:57:28.840234] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.307 [2024-12-10 03:57:28.899452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.307 [2024-12-10 03:57:28.937208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.307 [2024-12-10 03:57:28.937240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.307 [2024-12-10 03:57:28.937246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.307 [2024-12-10 03:57:28.937252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.307 [2024-12-10 03:57:28.937256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.307 [2024-12-10 03:57:28.937739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.307 [2024-12-10 03:57:29.087357] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aab0c0/0x1aaf5b0) succeed. 00:09:35.307 [2024-12-10 03:57:29.095136] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aac570/0x1af0c50) succeed. 
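With nvmf_tgt up (pid 654405, core mask 0x2) and both mlx5 IB devices registered, the bring-up is a fixed RPC sequence. Condensed from the rpc_cmd calls traced here and just below, with the rpc.py path shortened:

    # Condensed from the traced rpc_cmd calls (full path: .../spdk/scripts/rpc.py).
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420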
00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.307 Malloc0 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.307 [2024-12-10 03:57:29.176773] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=654433 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 654433 /var/tmp/bdevperf.sock 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 654433 ']' 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:35.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.307 [2024-12-10 03:57:29.223349] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:09:35.307 [2024-12-10 03:57:29.223389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654433 ] 00:09:35.307 [2024-12-10 03:57:29.280634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.307 [2024-12-10 03:57:29.318425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.307 NVMe0n1 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.307 03:57:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:35.307 Running I/O for 10 seconds... 
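On the initiator side the test exercises the queue depth it is named for: bdevperf starts idle (-z), a controller is attached over RDMA, and perform_tests runs the 10-second verify workload at queue depth 1024 with 4 KiB I/O. Condensed from the trace above, binary paths shortened:

    # Condensed from the traced bdevperf setup (paths shortened).
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests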
00:09:37.624 17660.00 IOPS, 68.98 MiB/s
[2024-12-10T02:57:32.950Z] 18348.50 IOPS, 71.67 MiB/s
[2024-12-10T02:57:33.887Z] 18432.00 IOPS, 72.00 MiB/s
[2024-12-10T02:57:34.824Z] 18536.00 IOPS, 72.41 MiB/s
[2024-12-10T02:57:35.761Z] 18628.60 IOPS, 72.77 MiB/s
[2024-12-10T02:57:36.698Z] 18602.67 IOPS, 72.67 MiB/s
[2024-12-10T02:57:37.635Z] 18679.29 IOPS, 72.97 MiB/s
[2024-12-10T02:57:39.013Z] 18688.00 IOPS, 73.00 MiB/s
[2024-12-10T02:57:39.950Z] 18675.22 IOPS, 72.95 MiB/s
[2024-12-10T02:57:39.950Z] 18724.00 IOPS, 73.14 MiB/s
00:09:45.561 Latency(us)
00:09:45.561 [2024-12-10T02:57:39.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:45.561 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:45.561 Verification LBA range: start 0x0 length 0x4000
00:09:45.561 NVMe0n1 : 10.04 18737.52 73.19 0.00 0.00 54495.76 14951.92 35146.71
00:09:45.561 [2024-12-10T02:57:39.950Z] ===================================================================================================================
00:09:45.561 [2024-12-10T02:57:39.950Z] Total : 18737.52 73.19 0.00 0.00 54495.76 14951.92 35146.71
00:09:45.561 {
00:09:45.561 "results": [
00:09:45.561 {
00:09:45.561 "job": "NVMe0n1",
00:09:45.561 "core_mask": "0x1",
00:09:45.561 "workload": "verify",
00:09:45.561 "status": "finished",
00:09:45.561 "verify_range": {
00:09:45.561 "start": 0,
00:09:45.561 "length": 16384
00:09:45.561 },
00:09:45.561 "queue_depth": 1024,
00:09:45.561 "io_size": 4096,
00:09:45.561 "runtime": 10.038947,
00:09:45.561 "iops": 18737.5229692915,
00:09:45.561 "mibps": 73.19344909879493,
00:09:45.561 "io_failed": 0,
00:09:45.561 "io_timeout": 0,
00:09:45.561 "avg_latency_us": 54495.757232892975,
00:09:45.561 "min_latency_us": 14951.917037037038,
00:09:45.561 "max_latency_us": 35146.71407407407
00:09:45.561 }
00:09:45.561 ],
00:09:45.561 "core_count": 1
00:09:45.561 }
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 654433
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 654433 ']'
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 654433
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 654433
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 654433'
00:09:45.561 killing process with pid 654433
00:09:45.561 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 654433
00:09:45.561 Received shutdown signal, test time was about 10.000000 seconds
00:09:45.561
00:09:45.561 Latency(us)
[2024-12-10T02:57:39.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-10T02:57:39.950Z] ===================================================================================================================
00:09:45.561 [2024-12-10T02:57:39.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 654433
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:45.562 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:45.562 rmmod nvme_rdma
00:09:45.562 rmmod nvme_fabrics
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 654405 ']'
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 654405
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 654405 ']'
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 654405
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:45.821 03:57:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 654405
00:09:45.821 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:45.822 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:45.822 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 654405'
00:09:45.822 killing process with pid 654405
00:09:45.822 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 654405
00:09:45.822 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 654405
00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:09:46.081
00:09:46.081 real 0m17.341s
00:09:46.081 user 0m23.817s
00:09:46.081 sys 0m4.913s
00:09:46.081 03:57:40
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.081 ************************************ 00:09:46.081 END TEST nvmf_queue_depth 00:09:46.081 ************************************ 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.081 ************************************ 00:09:46.081 START TEST nvmf_target_multipath 00:09:46.081 ************************************ 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:09:46.081 * Looking for test storage... 00:09:46.081 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.081 --rc genhtml_branch_coverage=1 00:09:46.081 --rc genhtml_function_coverage=1 00:09:46.081 --rc genhtml_legend=1 00:09:46.081 --rc geninfo_all_blocks=1 00:09:46.081 --rc geninfo_unexecuted_blocks=1 00:09:46.081 00:09:46.081 ' 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.081 --rc genhtml_branch_coverage=1 00:09:46.081 --rc genhtml_function_coverage=1 00:09:46.081 --rc genhtml_legend=1 00:09:46.081 --rc geninfo_all_blocks=1 00:09:46.081 --rc geninfo_unexecuted_blocks=1 00:09:46.081 00:09:46.081 ' 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:46.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.081 --rc genhtml_branch_coverage=1 00:09:46.081 --rc genhtml_function_coverage=1 00:09:46.081 --rc genhtml_legend=1 00:09:46.081 --rc geninfo_all_blocks=1 00:09:46.081 --rc geninfo_unexecuted_blocks=1 00:09:46.081 00:09:46.081 ' 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.081 --rc genhtml_branch_coverage=1 00:09:46.081 --rc genhtml_function_coverage=1 00:09:46.081 --rc genhtml_legend=1 00:09:46.081 --rc geninfo_all_blocks=1 00:09:46.081 --rc geninfo_unexecuted_blocks=1 00:09:46.081 00:09:46.081 ' 00:09:46.081 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.341 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.342 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.342 03:57:40 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:51.749 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.749 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.749 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.749 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.749 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.749 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.749 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.749 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:51.750 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:51.750 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:51.750 Found net devices under 0000:18:00.0: mlx_0_0 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:51.750 
03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:51.750 Found net devices under 0000:18:00.1: mlx_0_1 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:51.750 03:57:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:51.750 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:51.750 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:51.750 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:51.750 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
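The multipath test now repeats the environment probe, and the get_rdma_if_list helper traced here reduces the discovered net devices to the RDMA-capable ones by matching them against what rxe_cfg reports. Roughly, per the trace — a simplified sketch, with rxe_cfg standing in for the scripts/rxe_cfg_small.sh wrapper:

    # Simplified from the traced get_rdma_if_list logic: keep only net
    # devices that rxe_cfg reports as RDMA-capable.
    rxe_cfg() { "$SPDK_DIR/scripts/rxe_cfg_small.sh" "$@"; }  # SPDK_DIR assumed
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2
            fi
        done
    done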
00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:51.751 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:51.751 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:51.751 altname enp24s0f0np0 00:09:51.751 altname ens785f0np0 00:09:51.751 inet 192.168.100.8/24 scope global mlx_0_0 00:09:51.751 valid_lft forever preferred_lft forever 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:51.751 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:51.751 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:51.751 altname enp24s0f1np1 00:09:51.751 altname ens785f1np1 00:09:51.751 inet 192.168.100.9/24 scope global mlx_0_1 00:09:51.751 valid_lft forever preferred_lft forever 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:51.751 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:52.012 192.168.100.9' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:52.012 192.168.100.9' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:52.012 192.168.100.9' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:09:52.012 run this test only with TCP transport for now 00:09:52.012 03:57:46 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:52.012 rmmod nvme_rdma 00:09:52.012 rmmod nvme_fabrics 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:52.012 00:09:52.012 real 0m5.954s 00:09:52.012 user 0m1.757s 00:09:52.012 sys 0m4.346s 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
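multipath.sh has just short-circuited: on a non-TCP transport it prints the skip message, tears the target down with nvmftestfini, and exits 0 so the harness records a pass rather than a failure. A compact sketch of that guard (the target/multipath.sh@51-54 branch above); TEST_TRANSPORT and the stubbed nvmftestfini are stand-ins for the real harness pieces:

#!/usr/bin/env bash
# Sketch of the transport gate seen in target/multipath.sh.
TEST_TRANSPORT=${TEST_TRANSPORT:-rdma}    # rdma in this run
nvmftestfini() { echo 'stub: unload nvme-rdma/nvme-fabrics, kill target'; }
if [[ $TEST_TRANSPORT != tcp ]]; then
    echo 'run this test only with TCP transport for now'
    nvmftestfini
    exit 0    # deliberate exit 0: skipped, not failed
fi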
00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.012 ************************************ 00:09:52.012 END TEST nvmf_target_multipath 00:09:52.012 ************************************ 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.012 03:57:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.012 ************************************ 00:09:52.012 START TEST nvmf_zcopy 00:09:52.013 ************************************ 00:09:52.013 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:52.013 * Looking for test storage... 00:09:52.273 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.273 --rc genhtml_branch_coverage=1 00:09:52.273 --rc genhtml_function_coverage=1 00:09:52.273 --rc genhtml_legend=1 00:09:52.273 --rc geninfo_all_blocks=1 00:09:52.273 --rc geninfo_unexecuted_blocks=1 00:09:52.273 00:09:52.273 ' 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.273 --rc genhtml_branch_coverage=1 00:09:52.273 --rc genhtml_function_coverage=1 00:09:52.273 --rc genhtml_legend=1 00:09:52.273 --rc geninfo_all_blocks=1 00:09:52.273 --rc geninfo_unexecuted_blocks=1 00:09:52.273 00:09:52.273 ' 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.273 --rc genhtml_branch_coverage=1 00:09:52.273 --rc genhtml_function_coverage=1 00:09:52.273 --rc genhtml_legend=1 00:09:52.273 --rc geninfo_all_blocks=1 00:09:52.273 --rc geninfo_unexecuted_blocks=1 00:09:52.273 00:09:52.273 ' 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.273 --rc genhtml_branch_coverage=1 00:09:52.273 --rc genhtml_function_coverage=1 00:09:52.273 --rc genhtml_legend=1 00:09:52.273 --rc geninfo_all_blocks=1 00:09:52.273 --rc geninfo_unexecuted_blocks=1 00:09:52.273 00:09:52.273 ' 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.273 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.274 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.274 03:57:46 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.843 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:58.843 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:58.844 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
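The zcopy setup rebuilds its candidate NIC list from per-family arrays (e810, x722, mlx) filled by "vendor:device" lookups into a pci_bus_cache map, then keeps only the mlx set via the [[ mlx5 == mlx5 ]] branch above. A sketch of that table-driven pattern; building pci_bus_cache from sysfs here is a simplification of the real gather step:

#!/usr/bin/env bash
# Sketch: classify NICs via a "vendor:device" -> BDF-list table, as
# gather_supported_nvmf_pci_devs does. Cache construction is simplified.
declare -A pci_bus_cache
for dev in /sys/bus/pci/devices/*; do
    key="$(<"$dev/vendor"):$(<"$dev/device")"
    pci_bus_cache[$key]+="${dev##*/} "
done
intel=0x8086 mellanox=0x15b3
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
mlx=(${pci_bus_cache["$mellanox:0x1015"]} ${pci_bus_cache["$mellanox:0x1017"]})
pci_devs=("${mlx[@]}")    # mlx5 run: keep only the Mellanox ports
(( ${#pci_devs[@]} )) && printf 'candidate NIC: %s\n' "${pci_devs[@]}"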
00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:58.844 Found net devices under 0000:18:00.0: mlx_0_0 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:58.844 Found net devices under 0000:18:00.1: mlx_0_1 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:58.844 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:58.844 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:58.844 altname enp24s0f0np0 00:09:58.844 altname ens785f0np0 00:09:58.844 inet 192.168.100.8/24 scope global mlx_0_0 
00:09:58.844 valid_lft forever preferred_lft forever 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:58.844 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:58.844 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:58.844 altname enp24s0f1np1 00:09:58.844 altname ens785f1np1 00:09:58.844 inet 192.168.100.9/24 scope global mlx_0_1 00:09:58.844 valid_lft forever preferred_lft forever 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:58.844 03:57:52 
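The same get_ip_address/RDMA_IP_LIST handling seen in the multipath setup repeats here for zcopy: each interface's IPv4 is pulled off the ip -o -4 output, the results are joined into a newline-separated list, and head/tail peel off the first and second target IPs. A sketch, with the list assembly simplified into a loop:

#!/usr/bin/env bash
# Sketch of get_ip_address and the RDMA_IP_LIST split from nvmf/common.sh.
get_ip_address() {
    local interface=$1
    # fourth field of 'ip -o -4' is "addr/prefix"; cut strips the prefix
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"
# with the addresses above: first=192.168.100.8 second=192.168.100.9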
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:58.844 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:58.845 192.168.100.9' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:58.845 192.168.100.9' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:58.845 192.168.100.9' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=663039 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 663039 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 663039 ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.845 [2024-12-10 03:57:52.253870] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:09:58.845 [2024-12-10 03:57:52.253912] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.845 [2024-12-10 03:57:52.313665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.845 [2024-12-10 03:57:52.349467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.845 [2024-12-10 03:57:52.349499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.845 [2024-12-10 03:57:52.349507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.845 [2024-12-10 03:57:52.349512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.845 [2024-12-10 03:57:52.349516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
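nvmfappstart has just launched the target (nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 663039 here) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A reduced sketch of that start-and-wait pattern; polling with rpc.py spdk_get_version is an assumption about the readiness probe, not waitforlisten's exact body:

#!/usr/bin/env bash
# Sketch of nvmfappstart/waitforlisten. Paths match the trace; the RPC
# probe below is an assumed stand-in for the real readiness check.
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'target died during startup' >&2; exit 1; }
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
    sleep 0.1
done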
00:09:58.845 [2024-12-10 03:57:52.349967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:09:58.845 Unsupported transport: rdma 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:58.845 nvmf_trace.0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:58.845 rmmod nvme_rdma 00:09:58.845 rmmod nvme_fabrics 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 663039 ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 663039 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 663039 ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 663039 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663039 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 663039' 00:09:58.845 killing process with pid 663039 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 663039 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 663039 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:58.845 00:09:58.845 real 0m6.460s 00:09:58.845 user 0m2.399s 00:09:58.845 sys 0m4.560s 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.845 ************************************ 00:09:58.845 END TEST nvmf_zcopy 00:09:58.845 ************************************ 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.845 ************************************ 00:09:58.845 START TEST nvmf_nmic 00:09:58.845 ************************************ 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:58.845 * Looking for test storage... 
00:09:58.845 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.845 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.846 --rc genhtml_branch_coverage=1 00:09:58.846 --rc genhtml_function_coverage=1 00:09:58.846 --rc genhtml_legend=1 00:09:58.846 --rc geninfo_all_blocks=1 00:09:58.846 --rc geninfo_unexecuted_blocks=1 00:09:58.846 00:09:58.846 ' 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.846 --rc genhtml_branch_coverage=1 00:09:58.846 --rc genhtml_function_coverage=1 00:09:58.846 --rc genhtml_legend=1 00:09:58.846 --rc geninfo_all_blocks=1 00:09:58.846 --rc geninfo_unexecuted_blocks=1 00:09:58.846 00:09:58.846 ' 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.846 --rc genhtml_branch_coverage=1 00:09:58.846 --rc genhtml_function_coverage=1 00:09:58.846 --rc genhtml_legend=1 00:09:58.846 --rc geninfo_all_blocks=1 00:09:58.846 --rc geninfo_unexecuted_blocks=1 00:09:58.846 00:09:58.846 ' 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.846 --rc genhtml_branch_coverage=1 00:09:58.846 --rc genhtml_function_coverage=1 00:09:58.846 --rc genhtml_legend=1 00:09:58.846 --rc geninfo_all_blocks=1 00:09:58.846 --rc geninfo_unexecuted_blocks=1 00:09:58.846 00:09:58.846 ' 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:58.846 03:57:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.846 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
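Everything from the hostnqn generation down to the repeated PATH exports above is just nmic.sh sourcing test/nvmf/common.sh (which in turn pulls in scripts/common.sh and the pkgdep path exports). The test-specific preamble it then runs is small; reconstructed from the trace at nmic.sh@9-14, with $rootdir standing in for the absolute /var/jenkins/workspace/nvmf-phy-autotest/spdk prefix:

    source "$rootdir/test/nvmf/common.sh"
    MALLOC_BDEV_SIZE=64     # sizes the Malloc0 bdev created later (bdev_malloc_create 64 512)
    MALLOC_BLOCK_SIZE=512   # bytes
    nvmftestinit            # NIC detection, RDMA modules, 192.168.100.x addressing

nvmftestinit is what the next stretch of trace expands: since NET_TYPE=phy it skips virtual-interface setup and goes hunting for physical NICs: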
00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.846 03:57:53 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.119 03:57:57 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:04.119 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:04.119 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:04.119 Found net devices under 0000:18:00.0: mlx_0_0 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:04.119 Found net devices under 0000:18:00.1: mlx_0_1 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:04.119 03:57:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
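Both ports (Mellanox vendor 0x15b3, device 0x1015, i.e. ConnectX-4 Lx) resolved to net devices mlx_0_0 and mlx_0_1, so is_hw=yes and rdma_device_init proceeds to load the kernel RDMA stack. The modprobe calls traced around this point condense to the following (the helper issues them as individual modprobe statements; the loop is just a compact rendering):

    # kernel modules required for the RDMA/IB transport on a phy run
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done

With the stack loaded, allocate_nic_ips walks get_rdma_if_list and reads back the 192.168.100.8/24 and 192.168.100.9/24 addresses on the two ports: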
00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:04.119 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:04.120 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:04.120 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:04.120 altname enp24s0f0np0 00:10:04.120 altname ens785f0np0 
00:10:04.120 inet 192.168.100.8/24 scope global mlx_0_0 00:10:04.120 valid_lft forever preferred_lft forever 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:04.120 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:04.120 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:04.120 altname enp24s0f1np1 00:10:04.120 altname ens785f1np1 00:10:04.120 inet 192.168.100.9/24 scope global mlx_0_1 00:10:04.120 valid_lft forever preferred_lft forever 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:04.120 
03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:04.120 192.168.100.9' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:04.120 192.168.100.9' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:04.120 192.168.100.9' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=666318 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 666318 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 666318 ']' 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.120 [2024-12-10 03:57:58.239618] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:04.120 [2024-12-10 03:57:58.239661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.120 [2024-12-10 03:57:58.297242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.120 [2024-12-10 03:57:58.337265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.120 [2024-12-10 03:57:58.337304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.120 [2024-12-10 03:57:58.337311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.120 [2024-12-10 03:57:58.337316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.120 [2024-12-10 03:57:58.337321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
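nvmfappstart -m 0xF reduces to launching the target binary and blocking until its RPC socket answers. From the traced commands at nvmf/common.sh@508-510 (the backgrounding is implied; the trace shows only the expanded command and the resulting pid):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                # 666318 in this run
    waitforlisten "$nvmfpid"  # polls until /var/tmp/spdk.sock accepts JSON-RPC

-m 0xF pins one reactor to each of cores 0-3 (the four "Reactor started" notices that follow), and -e 0xFFFF enables every tracepoint group, which is why the app advertises copying /dev/shm/nvmf_trace.0 for offline analysis: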
00:10:04.120 [2024-12-10 03:57:58.338707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.120 [2024-12-10 03:57:58.338802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.120 [2024-12-10 03:57:58.338894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.120 [2024-12-10 03:57:58.338896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.120 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.120 [2024-12-10 03:57:58.493732] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe1b0c0/0xe1f5b0) succeed. 00:10:04.120 [2024-12-10 03:57:58.502043] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe1c750/0xe60c50) succeed. 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.380 Malloc0 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:04.380 03:57:58 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.380 [2024-12-10 03:57:58.676677] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:04.380 test case1: single bdev can't be used in multiple subsystems 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.380 [2024-12-10 03:57:58.700459] bdev.c:8511:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:04.380 [2024-12-10 03:57:58.700477] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:04.380 [2024-12-10 03:57:58.700483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.380 request: 00:10:04.380 { 00:10:04.380 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:04.380 "namespace": { 00:10:04.380 "bdev_name": "Malloc0", 00:10:04.380 "no_auto_visible": false, 00:10:04.380 "hide_metadata": false 00:10:04.380 }, 00:10:04.380 "method": "nvmf_subsystem_add_ns", 00:10:04.380 "req_id": 1 00:10:04.380 } 00:10:04.380 Got JSON-RPC error response 00:10:04.380 response: 00:10:04.380 { 00:10:04.380 "code": -32602, 00:10:04.380 "message": "Invalid parameters" 00:10:04.380 } 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
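Test case1 is a negative test: Malloc0 is already claimed (type exclusive_write) by cnode1, so adding it to cnode2 must fail. The rpc_cmd calls traced above are effectively the following rpc.py invocations against /var/tmp/spdk.sock (rpc_cmd is autotest shorthand; that it forwards to scripts/rpc.py is an assumption, though the argument spellings come straight from the expanded trace):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed

The JSON-RPC error -32602 above is the expected outcome, so nmic_status=1, the -eq 0 branch at nmic.sh@31 is skipped, and the suite records: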
00:10:04.380 Adding namespace failed - expected result. 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:04.380 test case2: host connect to nvmf target in multiple paths 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.380 [2024-12-10 03:57:58.712509] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.380 03:57:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:05.756 03:57:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:06.691 03:58:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.691 03:58:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:06.691 03:58:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.691 03:58:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:06.691 03:58:00 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:08.623 03:58:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:08.623 03:58:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:08.623 03:58:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.623 03:58:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:08.623 03:58:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.623 03:58:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:08.623 03:58:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:08.623 [global] 00:10:08.623 thread=1 00:10:08.623 invalidate=1 00:10:08.623 rw=write 00:10:08.623 time_based=1 00:10:08.623 runtime=1 00:10:08.623 ioengine=libaio 00:10:08.623 direct=1 00:10:08.623 bs=4096 00:10:08.623 iodepth=1 00:10:08.623 norandommap=0 00:10:08.623 numjobs=1 00:10:08.623 00:10:08.623 verify_dump=1 00:10:08.623 verify_backlog=512 00:10:08.623 verify_state_save=0 00:10:08.623 do_verify=1 00:10:08.623 verify=crc32c-intel 00:10:08.623 [job0] 00:10:08.623 filename=/dev/nvme0n1 00:10:08.623 Could not set queue depth 
(nvme0n1) 00:10:08.885 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:08.885 fio-3.35 00:10:08.885 Starting 1 thread 00:10:10.252 00:10:10.252 job0: (groupid=0, jobs=1): err= 0: pid=667285: Tue Dec 10 03:58:04 2024 00:10:10.252 read: IOPS=7184, BW=28.1MiB/s (29.4MB/s)(28.1MiB/1001msec) 00:10:10.252 slat (nsec): min=6365, max=26466, avg=7156.15, stdev=716.09 00:10:10.252 clat (usec): min=46, max=457, avg=58.06, stdev=15.19 00:10:10.252 lat (usec): min=55, max=464, avg=65.21, stdev=15.20 00:10:10.252 clat percentiles (usec): 00:10:10.252 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:10:10.252 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:10:10.252 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 63], 00:10:10.252 | 99.00th=[ 122], 99.50th=[ 172], 99.90th=[ 229], 99.95th=[ 363], 00:10:10.252 | 99.99th=[ 457] 00:10:10.253 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 00:10:10.253 slat (nsec): min=8605, max=39500, avg=9381.38, stdev=832.33 00:10:10.253 clat (usec): min=42, max=480, avg=55.87, stdev=14.58 00:10:10.253 lat (usec): min=55, max=489, avg=65.25, stdev=14.61 00:10:10.253 clat percentiles (usec): 00:10:10.253 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 52], 00:10:10.253 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 55], 60.00th=[ 56], 00:10:10.253 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 60], 95.00th=[ 61], 00:10:10.253 | 99.00th=[ 117], 99.50th=[ 172], 99.90th=[ 225], 99.95th=[ 302], 00:10:10.253 | 99.99th=[ 482] 00:10:10.253 bw ( KiB/s): min=30512, max=30512, per=99.42%, avg=30512.00, stdev= 0.00, samples=1 00:10:10.253 iops : min= 7628, max= 7628, avg=7628.00, stdev= 0.00, samples=1 00:10:10.253 lat (usec) : 50=5.10%, 100=93.70%, 250=1.14%, 500=0.06% 00:10:10.253 cpu : usr=6.90%, sys=12.20%, ctx=14872, majf=0, minf=1 00:10:10.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.253 issued rwts: total=7192,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.253 00:10:10.253 Run status group 0 (all jobs): 00:10:10.253 READ: bw=28.1MiB/s (29.4MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.4MB/s), io=28.1MiB (29.5MB), run=1001-1001msec 00:10:10.253 WRITE: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:10:10.253 00:10:10.253 Disk stats (read/write): 00:10:10.253 nvme0n1: ios=6706/6699, merge=0/0, ticks=347/357, in_queue=704, util=90.68% 00:10:10.253 03:58:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:12.145 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.145 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:12.145 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.145 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:12.146 rmmod nvme_rdma 00:10:12.146 rmmod nvme_fabrics 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 666318 ']' 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 666318 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 666318 ']' 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 666318 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666318 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666318' 00:10:12.146 killing process with pid 666318 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 666318 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 666318 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:12.146 00:10:12.146 real 0m13.675s 00:10:12.146 user 0m41.799s 00:10:12.146 sys 0m4.816s 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.146 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:12.146 ************************************ 00:10:12.146 END TEST 
nvmf_nmic 00:10:12.146 ************************************ 00:10:12.403 03:58:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.404 ************************************ 00:10:12.404 START TEST nvmf_fio_target 00:10:12.404 ************************************ 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:12.404 * Looking for test storage... 00:10:12.404 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:12.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.404 --rc genhtml_branch_coverage=1 00:10:12.404 --rc genhtml_function_coverage=1 00:10:12.404 --rc genhtml_legend=1 00:10:12.404 --rc geninfo_all_blocks=1 00:10:12.404 --rc geninfo_unexecuted_blocks=1 00:10:12.404 00:10:12.404 ' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:12.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.404 --rc genhtml_branch_coverage=1 00:10:12.404 --rc genhtml_function_coverage=1 00:10:12.404 --rc genhtml_legend=1 00:10:12.404 --rc geninfo_all_blocks=1 00:10:12.404 --rc geninfo_unexecuted_blocks=1 00:10:12.404 00:10:12.404 ' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:12.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.404 --rc genhtml_branch_coverage=1 00:10:12.404 --rc genhtml_function_coverage=1 00:10:12.404 --rc genhtml_legend=1 00:10:12.404 --rc geninfo_all_blocks=1 00:10:12.404 --rc geninfo_unexecuted_blocks=1 00:10:12.404 00:10:12.404 ' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:12.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.404 --rc genhtml_branch_coverage=1 00:10:12.404 --rc genhtml_function_coverage=1 00:10:12.404 --rc genhtml_legend=1 00:10:12.404 --rc geninfo_all_blocks=1 00:10:12.404 --rc geninfo_unexecuted_blocks=1 00:10:12.404 00:10:12.404 ' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.404 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.405 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.405 
03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.405 03:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:18.958 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:18.958 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:18.959 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:18.959 Found net devices under 0000:18:00.0: mlx_0_0 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:18.959 Found net devices under 0000:18:00.1: mlx_0_1 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:18.959 03:58:12 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:18.959 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:18.959 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:18.959 altname enp24s0f0np0 00:10:18.959 altname ens785f0np0 00:10:18.959 inet 192.168.100.8/24 scope global mlx_0_0 00:10:18.959 valid_lft forever preferred_lft forever 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:18.959 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:18.959 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:18.959 altname enp24s0f1np1 00:10:18.959 altname ens785f1np1 00:10:18.959 inet 192.168.100.9/24 scope global mlx_0_1 00:10:18.959 valid_lft forever preferred_lft forever 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:18.959 03:58:12 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:18.959 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:18.960 192.168.100.9' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:18.960 192.168.100.9' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:18.960 192.168.100.9' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=671334 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 671334 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 671334 ']' 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.960 [2024-12-10 03:58:12.488878] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:18.960 [2024-12-10 03:58:12.488924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.960 [2024-12-10 03:58:12.549722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.960 [2024-12-10 03:58:12.589283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:18.960 [2024-12-10 03:58:12.589318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.960 [2024-12-10 03:58:12.589325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.960 [2024-12-10 03:58:12.589330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.960 [2024-12-10 03:58:12.589335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.960 [2024-12-10 03:58:12.590765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.960 [2024-12-10 03:58:12.590857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.960 [2024-12-10 03:58:12.590956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.960 [2024-12-10 03:58:12.590959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.960 03:58:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:18.960 [2024-12-10 03:58:12.904604] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12430c0/0x12475b0) succeed. 00:10:18.960 [2024-12-10 03:58:12.913127] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1244750/0x1288c50) succeed. 
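(For reference: the RDMA target bring-up traced above reduces to the short sequence below. This is a minimal sketch assembled only from the commands recorded in this log; it assumes it is run as root from the top of an SPDK checkout, so the absolute /var/jenkins/... paths are shortened to relative ones, and a plain sleep stands in for the harness's waitforlisten on the RPC socket. The flags are copied verbatim from the trace: -i 0 is the shared-memory id, -e 0xFFFF the tracepoint group mask, -m 0xF the 4-core mask, and nvmf_create_transport's -u and --num-shared-buffers size the transport I/O unit and shared data-buffer pool.)

    # load the InfiniBand/RDMA kernel modules the harness probes above
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        modprobe -v "$m"
    done

    # start the NVMe-oF target and create the RDMA transport with the
    # same options the test uses (1024 shared buffers, 8192-byte IO unit)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    sleep 2   # the harness waits for the RPC socket instead
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192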
00:10:18.960 03:58:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.960 03:58:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:18.960 03:58:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.217 03:58:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:19.217 03:58:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.474 03:58:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:19.474 03:58:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.731 03:58:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:19.731 03:58:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:19.731 03:58:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.987 03:58:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:19.987 03:58:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.244 03:58:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:20.244 03:58:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.501 03:58:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:20.501 03:58:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:20.501 03:58:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.757 03:58:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:20.757 03:58:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.014 03:58:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:21.014 03:58:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:21.014 03:58:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:21.270 [2024-12-10 03:58:15.547731] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:21.270 03:58:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:21.527 03:58:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:21.784 03:58:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:22.713 03:58:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:22.713 03:58:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:22.713 03:58:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.713 03:58:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:22.713 03:58:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:22.713 03:58:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:24.605 03:58:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:24.605 03:58:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:24.605 03:58:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.605 03:58:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:24.605 03:58:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.605 03:58:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:24.605 03:58:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:24.605 [global] 00:10:24.605 thread=1 00:10:24.605 invalidate=1 00:10:24.605 rw=write 00:10:24.605 time_based=1 00:10:24.605 runtime=1 00:10:24.605 ioengine=libaio 00:10:24.605 direct=1 00:10:24.605 bs=4096 00:10:24.605 iodepth=1 00:10:24.605 norandommap=0 00:10:24.605 numjobs=1 00:10:24.605 00:10:24.605 verify_dump=1 00:10:24.605 verify_backlog=512 00:10:24.605 verify_state_save=0 00:10:24.605 do_verify=1 00:10:24.605 verify=crc32c-intel 00:10:24.605 [job0] 00:10:24.605 filename=/dev/nvme0n1 00:10:24.605 [job1] 00:10:24.605 filename=/dev/nvme0n2 00:10:24.605 [job2] 00:10:24.605 filename=/dev/nvme0n3 00:10:24.605 [job3] 00:10:24.605 filename=/dev/nvme0n4 00:10:24.884 Could not set queue depth (nvme0n1) 00:10:24.884 Could not set queue depth (nvme0n2) 00:10:24.884 Could not set queue depth (nvme0n3) 00:10:24.884 Could not set queue depth (nvme0n4) 00:10:25.147 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.147 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.147 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.147 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.147 fio-3.35 00:10:25.147 Starting 4 threads 00:10:26.522 00:10:26.522 job0: (groupid=0, jobs=1): err= 0: pid=672745: Tue Dec 10 03:58:20 2024 00:10:26.522 read: IOPS=3861, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1001msec) 00:10:26.522 slat (nsec): min=4946, max=26483, avg=7124.80, stdev=1084.34 00:10:26.522 clat (usec): min=63, max=348, avg=117.83, stdev=17.53 00:10:26.522 lat (usec): min=69, max=356, avg=124.95, stdev=17.80 00:10:26.522 clat percentiles (usec): 00:10:26.522 | 1.00th=[ 89], 5.00th=[ 95], 10.00th=[ 99], 20.00th=[ 105], 00:10:26.522 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 118], 00:10:26.522 | 70.00th=[ 122], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 147], 00:10:26.522 | 99.00th=[ 169], 99.50th=[ 182], 99.90th=[ 237], 99.95th=[ 310], 00:10:26.522 | 99.99th=[ 351] 00:10:26.522 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:10:26.522 slat (nsec): min=6673, max=49720, avg=9344.62, stdev=1122.40 00:10:26.522 clat (usec): min=55, max=392, avg=112.84, stdev=17.55 00:10:26.522 lat (usec): min=64, max=401, avg=122.19, stdev=17.68 00:10:26.522 clat percentiles (usec): 00:10:26.522 | 1.00th=[ 80], 5.00th=[ 87], 10.00th=[ 91], 20.00th=[ 98], 00:10:26.522 | 30.00th=[ 103], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 118], 00:10:26.522 | 70.00th=[ 122], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 139], 00:10:26.522 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 196], 99.95th=[ 269], 00:10:26.522 | 99.99th=[ 392] 00:10:26.522 bw ( KiB/s): min=16384, max=16384, per=24.97%, avg=16384.00, stdev= 0.00, samples=1 00:10:26.522 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:26.522 lat (usec) : 100=18.21%, 250=81.71%, 500=0.08% 00:10:26.522 cpu : usr=3.30%, sys=7.20%, ctx=7962, majf=0, minf=1 00:10:26.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.522 issued rwts: total=3865,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.522 job1: (groupid=0, jobs=1): err= 0: pid=672754: Tue Dec 10 03:58:20 2024 00:10:26.522 read: IOPS=3950, BW=15.4MiB/s (16.2MB/s)(15.4MiB/1001msec) 00:10:26.522 slat (nsec): min=6301, max=15886, avg=7168.14, stdev=629.44 00:10:26.522 clat (usec): min=66, max=230, avg=116.24, stdev=16.02 00:10:26.522 lat (usec): min=73, max=237, avg=123.41, stdev=16.07 00:10:26.522 clat percentiles (usec): 00:10:26.522 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 104], 00:10:26.522 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 116], 00:10:26.522 | 70.00th=[ 120], 80.00th=[ 131], 90.00th=[ 141], 95.00th=[ 147], 00:10:26.522 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 204], 99.95th=[ 215], 00:10:26.522 | 99.99th=[ 231] 00:10:26.522 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:10:26.522 slat (nsec): min=7418, max=56917, avg=9468.33, stdev=1051.22 00:10:26.522 clat (usec): min=60, 
max=316, avg=111.40, stdev=17.15 00:10:26.522 lat (usec): min=69, max=325, avg=120.87, stdev=17.29 00:10:26.522 clat percentiles (usec): 00:10:26.522 | 1.00th=[ 74], 5.00th=[ 89], 10.00th=[ 93], 20.00th=[ 97], 00:10:26.522 | 30.00th=[ 101], 40.00th=[ 108], 50.00th=[ 112], 60.00th=[ 115], 00:10:26.522 | 70.00th=[ 120], 80.00th=[ 126], 90.00th=[ 133], 95.00th=[ 137], 00:10:26.522 | 99.00th=[ 159], 99.50th=[ 172], 99.90th=[ 210], 99.95th=[ 233], 00:10:26.522 | 99.99th=[ 318] 00:10:26.522 bw ( KiB/s): min=16384, max=16384, per=24.97%, avg=16384.00, stdev= 0.00, samples=1 00:10:26.522 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:26.522 lat (usec) : 100=19.78%, 250=80.20%, 500=0.02% 00:10:26.522 cpu : usr=4.50%, sys=6.20%, ctx=8051, majf=0, minf=1 00:10:26.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.522 issued rwts: total=3954,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.522 job2: (groupid=0, jobs=1): err= 0: pid=672773: Tue Dec 10 03:58:20 2024 00:10:26.522 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:10:26.522 slat (nsec): min=6387, max=28133, avg=7199.16, stdev=819.95 00:10:26.522 clat (usec): min=72, max=295, avg=110.67, stdev=21.71 00:10:26.522 lat (usec): min=79, max=303, avg=117.87, stdev=21.75 00:10:26.522 clat percentiles (usec): 00:10:26.522 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 84], 20.00th=[ 87], 00:10:26.522 | 30.00th=[ 92], 40.00th=[ 111], 50.00th=[ 115], 60.00th=[ 118], 00:10:26.522 | 70.00th=[ 123], 80.00th=[ 129], 90.00th=[ 135], 95.00th=[ 141], 00:10:26.522 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 206], 99.95th=[ 229], 00:10:26.522 | 99.99th=[ 297] 00:10:26.522 write: IOPS=4340, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1001msec); 0 zone resets 00:10:26.522 slat (nsec): min=8589, max=37109, avg=9584.94, stdev=876.49 00:10:26.522 clat (usec): min=57, max=280, avg=105.46, stdev=21.09 00:10:26.522 lat (usec): min=66, max=290, avg=115.05, stdev=21.18 00:10:26.522 clat percentiles (usec): 00:10:26.522 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 83], 00:10:26.522 | 30.00th=[ 87], 40.00th=[ 104], 50.00th=[ 112], 60.00th=[ 116], 00:10:26.522 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 128], 95.00th=[ 133], 00:10:26.522 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 202], 99.95th=[ 229], 00:10:26.522 | 99.99th=[ 281] 00:10:26.522 bw ( KiB/s): min=19472, max=19472, per=29.68%, avg=19472.00, stdev= 0.00, samples=1 00:10:26.522 iops : min= 4868, max= 4868, avg=4868.00, stdev= 0.00, samples=1 00:10:26.522 lat (usec) : 100=37.41%, 250=62.55%, 500=0.04% 00:10:26.522 cpu : usr=3.10%, sys=8.10%, ctx=8441, majf=0, minf=1 00:10:26.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.522 issued rwts: total=4096,4345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.522 job3: (groupid=0, jobs=1): err= 0: pid=672778: Tue Dec 10 03:58:20 2024 00:10:26.522 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:26.522 slat (nsec): min=6415, max=28530, avg=7505.20, stdev=954.41 00:10:26.522 clat 
(usec): min=74, max=413, avg=125.62, stdev=16.84 00:10:26.522 lat (usec): min=81, max=420, avg=133.12, stdev=16.89 00:10:26.522 clat percentiles (usec): 00:10:26.522 | 1.00th=[ 89], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 113], 00:10:26.522 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 125], 60.00th=[ 130], 00:10:26.522 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 151], 00:10:26.522 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 212], 99.95th=[ 233], 00:10:26.522 | 99.99th=[ 412] 00:10:26.522 write: IOPS=3879, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1001msec); 0 zone resets 00:10:26.522 slat (nsec): min=8657, max=36046, avg=9819.56, stdev=1095.43 00:10:26.522 clat (usec): min=68, max=297, avg=120.77, stdev=15.25 00:10:26.522 lat (usec): min=77, max=307, avg=130.59, stdev=15.25 00:10:26.522 clat percentiles (usec): 00:10:26.522 | 1.00th=[ 84], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 112], 00:10:26.522 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 123], 00:10:26.522 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 145], 00:10:26.523 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 225], 99.95th=[ 229], 00:10:26.523 | 99.99th=[ 297] 00:10:26.523 bw ( KiB/s): min=16024, max=16024, per=24.42%, avg=16024.00, stdev= 0.00, samples=1 00:10:26.523 iops : min= 4006, max= 4006, avg=4006.00, stdev= 0.00, samples=1 00:10:26.523 lat (usec) : 100=3.40%, 250=96.57%, 500=0.03% 00:10:26.523 cpu : usr=3.50%, sys=6.70%, ctx=7467, majf=0, minf=1 00:10:26.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.523 issued rwts: total=3584,3883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.523 00:10:26.523 Run status group 0 (all jobs): 00:10:26.523 READ: bw=60.5MiB/s (63.4MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=60.5MiB (63.5MB), run=1001-1001msec 00:10:26.523 WRITE: bw=64.1MiB/s (67.2MB/s), 15.2MiB/s-17.0MiB/s (15.9MB/s-17.8MB/s), io=64.1MiB (67.3MB), run=1001-1001msec 00:10:26.523 00:10:26.523 Disk stats (read/write): 00:10:26.523 nvme0n1: ios=3334/3584, merge=0/0, ticks=370/382, in_queue=752, util=87.07% 00:10:26.523 nvme0n2: ios=3354/3584, merge=0/0, ticks=383/388, in_queue=771, util=87.54% 00:10:26.523 nvme0n3: ios=3584/3764, merge=0/0, ticks=387/390, in_queue=777, util=89.26% 00:10:26.523 nvme0n4: ios=3072/3283, merge=0/0, ticks=389/384, in_queue=773, util=89.81% 00:10:26.523 03:58:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:26.523 [global] 00:10:26.523 thread=1 00:10:26.523 invalidate=1 00:10:26.523 rw=randwrite 00:10:26.523 time_based=1 00:10:26.523 runtime=1 00:10:26.523 ioengine=libaio 00:10:26.523 direct=1 00:10:26.523 bs=4096 00:10:26.523 iodepth=1 00:10:26.523 norandommap=0 00:10:26.523 numjobs=1 00:10:26.523 00:10:26.523 verify_dump=1 00:10:26.523 verify_backlog=512 00:10:26.523 verify_state_save=0 00:10:26.523 do_verify=1 00:10:26.523 verify=crc32c-intel 00:10:26.523 [job0] 00:10:26.523 filename=/dev/nvme0n1 00:10:26.523 [job1] 00:10:26.523 filename=/dev/nvme0n2 00:10:26.523 [job2] 00:10:26.523 filename=/dev/nvme0n3 00:10:26.523 [job3] 00:10:26.523 filename=/dev/nvme0n4 00:10:26.523 Could not set queue depth (nvme0n1) 00:10:26.523 Could not set queue depth 
(nvme0n2) 00:10:26.523 Could not set queue depth (nvme0n3) 00:10:26.523 Could not set queue depth (nvme0n4) 00:10:26.523 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.523 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.523 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.523 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.523 fio-3.35 00:10:26.523 Starting 4 threads 00:10:27.901 00:10:27.901 job0: (groupid=0, jobs=1): err= 0: pid=673208: Tue Dec 10 03:58:22 2024 00:10:27.901 read: IOPS=5602, BW=21.9MiB/s (22.9MB/s)(21.9MiB/1001msec) 00:10:27.901 slat (nsec): min=6059, max=26143, avg=6801.84, stdev=714.80 00:10:27.901 clat (usec): min=62, max=257, avg=80.05, stdev= 9.97 00:10:27.901 lat (usec): min=69, max=264, avg=86.85, stdev=10.02 00:10:27.901 clat percentiles (usec): 00:10:27.901 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:10:27.901 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 80], 00:10:27.901 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 92], 00:10:27.901 | 99.00th=[ 126], 99.50th=[ 135], 99.90th=[ 165], 99.95th=[ 174], 00:10:27.901 | 99.99th=[ 258] 00:10:27.901 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:10:27.901 slat (nsec): min=7677, max=39133, avg=8810.97, stdev=1014.59 00:10:27.901 clat (usec): min=60, max=237, avg=78.34, stdev=13.56 00:10:27.901 lat (usec): min=69, max=246, avg=87.15, stdev=13.64 00:10:27.901 clat percentiles (usec): 00:10:27.901 | 1.00th=[ 64], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:10:27.901 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 77], 00:10:27.901 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 111], 00:10:27.901 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 159], 99.95th=[ 161], 00:10:27.901 | 99.99th=[ 237] 00:10:27.901 bw ( KiB/s): min=23360, max=23360, per=31.60%, avg=23360.00, stdev= 0.00, samples=1 00:10:27.901 iops : min= 5840, max= 5840, avg=5840.00, stdev= 0.00, samples=1 00:10:27.901 lat (usec) : 100=94.81%, 250=5.18%, 500=0.01% 00:10:27.901 cpu : usr=4.80%, sys=9.50%, ctx=11240, majf=0, minf=1 00:10:27.901 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.901 issued rwts: total=5608,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.901 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.901 job1: (groupid=0, jobs=1): err= 0: pid=673221: Tue Dec 10 03:58:22 2024 00:10:27.901 read: IOPS=3537, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec) 00:10:27.901 slat (nsec): min=6122, max=28286, avg=7136.83, stdev=919.18 00:10:27.901 clat (usec): min=67, max=520, avg=134.57, stdev=24.68 00:10:27.901 lat (usec): min=74, max=527, avg=141.71, stdev=24.69 00:10:27.901 clat percentiles (usec): 00:10:27.901 | 1.00th=[ 84], 5.00th=[ 94], 10.00th=[ 114], 20.00th=[ 125], 00:10:27.901 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:10:27.901 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 174], 00:10:27.901 | 99.00th=[ 204], 99.50th=[ 249], 99.90th=[ 396], 99.95th=[ 490], 00:10:27.901 | 99.99th=[ 523] 00:10:27.901 write: IOPS=3580, BW=14.0MiB/s 
(14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:27.901 slat (nsec): min=7753, max=35038, avg=9176.55, stdev=880.03 00:10:27.901 clat (usec): min=57, max=449, avg=125.52, stdev=24.53 00:10:27.901 lat (usec): min=65, max=459, avg=134.70, stdev=24.55 00:10:27.901 clat percentiles (usec): 00:10:27.901 | 1.00th=[ 75], 5.00th=[ 85], 10.00th=[ 100], 20.00th=[ 114], 00:10:27.901 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:10:27.901 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 165], 00:10:27.901 | 99.00th=[ 192], 99.50th=[ 251], 99.90th=[ 322], 99.95th=[ 416], 00:10:27.901 | 99.99th=[ 449] 00:10:27.901 bw ( KiB/s): min=16384, max=16384, per=22.16%, avg=16384.00, stdev= 0.00, samples=1 00:10:27.901 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:27.901 lat (usec) : 100=8.34%, 250=91.17%, 500=0.48%, 750=0.01% 00:10:27.901 cpu : usr=3.30%, sys=6.20%, ctx=7125, majf=0, minf=1 00:10:27.901 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.901 issued rwts: total=3541,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.901 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.901 job2: (groupid=0, jobs=1): err= 0: pid=673244: Tue Dec 10 03:58:22 2024 00:10:27.901 read: IOPS=5361, BW=20.9MiB/s (22.0MB/s)(21.0MiB/1001msec) 00:10:27.901 slat (nsec): min=6198, max=16454, avg=7044.64, stdev=618.76 00:10:27.901 clat (usec): min=66, max=115, avg=83.41, stdev= 5.78 00:10:27.901 lat (usec): min=73, max=122, avg=90.46, stdev= 5.83 00:10:27.901 clat percentiles (usec): 00:10:27.901 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:10:27.901 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 85], 00:10:27.901 | 70.00th=[ 86], 80.00th=[ 88], 90.00th=[ 91], 95.00th=[ 94], 00:10:27.901 | 99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 110], 99.95th=[ 112], 00:10:27.901 | 99.99th=[ 116] 00:10:27.901 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:10:27.901 slat (nsec): min=8106, max=38225, avg=8915.04, stdev=951.00 00:10:27.901 clat (usec): min=61, max=301, avg=78.67, stdev= 6.36 00:10:27.901 lat (usec): min=70, max=310, avg=87.58, stdev= 6.48 00:10:27.901 clat percentiles (usec): 00:10:27.901 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:10:27.901 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 80], 00:10:27.901 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 90], 00:10:27.901 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 102], 99.95th=[ 112], 00:10:27.901 | 99.99th=[ 302] 00:10:27.901 bw ( KiB/s): min=23544, max=23544, per=31.85%, avg=23544.00, stdev= 0.00, samples=1 00:10:27.901 iops : min= 5886, max= 5886, avg=5886.00, stdev= 0.00, samples=1 00:10:27.901 lat (usec) : 100=99.40%, 250=0.59%, 500=0.01% 00:10:27.901 cpu : usr=5.10%, sys=8.90%, ctx=10999, majf=0, minf=1 00:10:27.901 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.901 issued rwts: total=5367,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.901 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.901 job3: (groupid=0, jobs=1): err= 0: pid=673253: Tue Dec 10 03:58:22 2024 00:10:27.901 read: 
IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:27.901 slat (nsec): min=6201, max=17420, avg=7302.97, stdev=684.02 00:10:27.901 clat (usec): min=77, max=515, avg=132.20, stdev=22.99 00:10:27.901 lat (usec): min=87, max=523, avg=139.50, stdev=23.00 00:10:27.901 clat percentiles (usec): 00:10:27.901 | 1.00th=[ 86], 5.00th=[ 93], 10.00th=[ 101], 20.00th=[ 123], 00:10:27.901 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:10:27.901 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 165], 00:10:27.901 | 99.00th=[ 192], 99.50th=[ 231], 99.90th=[ 351], 99.95th=[ 429], 00:10:27.901 | 99.99th=[ 515] 00:10:27.901 write: IOPS=3648, BW=14.3MiB/s (14.9MB/s)(14.3MiB/1001msec); 0 zone resets 00:10:27.901 slat (nsec): min=8399, max=36828, avg=9419.85, stdev=1074.91 00:10:27.901 clat (usec): min=74, max=491, avg=123.16, stdev=25.84 00:10:27.901 lat (usec): min=83, max=500, avg=132.58, stdev=25.87 00:10:27.901 clat percentiles (usec): 00:10:27.901 | 1.00th=[ 80], 5.00th=[ 85], 10.00th=[ 89], 20.00th=[ 111], 00:10:27.901 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 128], 00:10:27.901 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 159], 00:10:27.901 | 99.00th=[ 202], 99.50th=[ 237], 99.90th=[ 424], 99.95th=[ 474], 00:10:27.901 | 99.99th=[ 490] 00:10:27.901 bw ( KiB/s): min=16384, max=16384, per=22.16%, avg=16384.00, stdev= 0.00, samples=1 00:10:27.901 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:27.901 lat (usec) : 100=13.63%, 250=85.97%, 500=0.39%, 750=0.01% 00:10:27.901 cpu : usr=3.50%, sys=6.20%, ctx=7236, majf=0, minf=1 00:10:27.901 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.901 issued rwts: total=3584,3652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.901 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.901 00:10:27.901 Run status group 0 (all jobs): 00:10:27.901 READ: bw=70.6MiB/s (74.1MB/s), 13.8MiB/s-21.9MiB/s (14.5MB/s-22.9MB/s), io=70.7MiB (74.1MB), run=1001-1001msec 00:10:27.901 WRITE: bw=72.2MiB/s (75.7MB/s), 14.0MiB/s-22.0MiB/s (14.7MB/s-23.0MB/s), io=72.3MiB (75.8MB), run=1001-1001msec 00:10:27.901 00:10:27.901 Disk stats (read/write): 00:10:27.901 nvme0n1: ios=4658/4699, merge=0/0, ticks=364/348, in_queue=712, util=84.47% 00:10:27.901 nvme0n2: ios=2880/3072, merge=0/0, ticks=369/377, in_queue=746, util=85.19% 00:10:27.901 nvme0n3: ios=4548/4608, merge=0/0, ticks=373/343, in_queue=716, util=88.45% 00:10:27.901 nvme0n4: ios=2994/3072, merge=0/0, ticks=390/361, in_queue=751, util=89.50% 00:10:27.901 03:58:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:27.901 [global] 00:10:27.901 thread=1 00:10:27.901 invalidate=1 00:10:27.901 rw=write 00:10:27.901 time_based=1 00:10:27.901 runtime=1 00:10:27.901 ioengine=libaio 00:10:27.901 direct=1 00:10:27.901 bs=4096 00:10:27.901 iodepth=128 00:10:27.901 norandommap=0 00:10:27.901 numjobs=1 00:10:27.901 00:10:27.901 verify_dump=1 00:10:27.901 verify_backlog=512 00:10:27.901 verify_state_save=0 00:10:27.901 do_verify=1 00:10:27.901 verify=crc32c-intel 00:10:27.901 [job0] 00:10:27.901 filename=/dev/nvme0n1 00:10:27.901 [job1] 00:10:27.901 filename=/dev/nvme0n2 00:10:27.901 [job2] 00:10:27.901 
filename=/dev/nvme0n3 00:10:27.901 [job3] 00:10:27.901 filename=/dev/nvme0n4 00:10:27.901 Could not set queue depth (nvme0n1) 00:10:27.901 Could not set queue depth (nvme0n2) 00:10:27.901 Could not set queue depth (nvme0n3) 00:10:27.901 Could not set queue depth (nvme0n4) 00:10:28.160 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.160 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.160 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.160 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.160 fio-3.35 00:10:28.160 Starting 4 threads 00:10:29.560 00:10:29.561 job0: (groupid=0, jobs=1): err= 0: pid=673689: Tue Dec 10 03:58:23 2024 00:10:29.561 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:10:29.561 slat (nsec): min=1250, max=5484.9k, avg=86199.40, stdev=422971.52 00:10:29.561 clat (usec): min=3545, max=21017, avg=11334.09, stdev=3045.84 00:10:29.561 lat (usec): min=3568, max=21039, avg=11420.29, stdev=3053.24 00:10:29.561 clat percentiles (usec): 00:10:29.561 | 1.00th=[ 4686], 5.00th=[ 6652], 10.00th=[ 7504], 20.00th=[ 8848], 00:10:29.561 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11338], 60.00th=[12125], 00:10:29.561 | 70.00th=[12518], 80.00th=[13304], 90.00th=[15139], 95.00th=[16909], 00:10:29.561 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:10:29.561 | 99.99th=[21103] 00:10:29.561 write: IOPS=5902, BW=23.1MiB/s (24.2MB/s)(23.1MiB/1001msec); 0 zone resets 00:10:29.561 slat (nsec): min=1779, max=5629.5k, avg=83709.03, stdev=372030.22 00:10:29.561 clat (usec): min=424, max=19902, avg=10632.06, stdev=3336.37 00:10:29.561 lat (usec): min=880, max=20344, avg=10715.76, stdev=3344.30 00:10:29.561 clat percentiles (usec): 00:10:29.561 | 1.00th=[ 3654], 5.00th=[ 4621], 10.00th=[ 6063], 20.00th=[ 7898], 00:10:29.561 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[10945], 60.00th=[11469], 00:10:29.561 | 70.00th=[12387], 80.00th=[13304], 90.00th=[15008], 95.00th=[16450], 00:10:29.561 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:29.561 | 99.99th=[19792] 00:10:29.561 bw ( KiB/s): min=24576, max=24576, per=24.50%, avg=24576.00, stdev= 0.00, samples=1 00:10:29.561 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:10:29.561 lat (usec) : 500=0.01%, 1000=0.02% 00:10:29.561 lat (msec) : 4=1.06%, 10=37.09%, 20=61.30%, 50=0.53% 00:10:29.561 cpu : usr=2.90%, sys=4.10%, ctx=1963, majf=0, minf=1 00:10:29.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:29.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.561 issued rwts: total=5632,5908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.561 job1: (groupid=0, jobs=1): err= 0: pid=673703: Tue Dec 10 03:58:23 2024 00:10:29.561 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:29.561 slat (nsec): min=1251, max=4294.6k, avg=84457.83, stdev=388888.49 00:10:29.561 clat (usec): min=3701, max=18563, avg=10987.79, stdev=2950.05 00:10:29.561 lat (usec): min=3707, max=18588, avg=11072.25, stdev=2956.54 00:10:29.561 clat percentiles (usec): 00:10:29.561 | 1.00th=[ 4178], 5.00th=[ 6259], 10.00th=[ 
6980], 20.00th=[ 8225], 00:10:29.561 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[11207], 60.00th=[11863], 00:10:29.561 | 70.00th=[12518], 80.00th=[13435], 90.00th=[15008], 95.00th=[15926], 00:10:29.561 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:10:29.561 | 99.99th=[18482] 00:10:29.561 write: IOPS=5913, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1003msec); 0 zone resets 00:10:29.561 slat (nsec): min=1745, max=4936.8k, avg=85119.31, stdev=378185.95 00:10:29.561 clat (usec): min=1607, max=20172, avg=10990.30, stdev=3182.80 00:10:29.561 lat (usec): min=3236, max=20416, avg=11075.42, stdev=3191.12 00:10:29.561 clat percentiles (usec): 00:10:29.561 | 1.00th=[ 3589], 5.00th=[ 4883], 10.00th=[ 6259], 20.00th=[ 8717], 00:10:29.561 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11207], 60.00th=[12125], 00:10:29.561 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14484], 95.00th=[15533], 00:10:29.561 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:10:29.561 | 99.99th=[20055] 00:10:29.561 bw ( KiB/s): min=22248, max=24184, per=23.15%, avg=23216.00, stdev=1368.96, samples=2 00:10:29.561 iops : min= 5562, max= 6046, avg=5804.00, stdev=342.24, samples=2 00:10:29.561 lat (msec) : 2=0.01%, 4=1.44%, 10=32.86%, 20=65.61%, 50=0.08% 00:10:29.561 cpu : usr=2.69%, sys=4.39%, ctx=1927, majf=0, minf=1 00:10:29.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:29.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.561 issued rwts: total=5632,5931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.561 job2: (groupid=0, jobs=1): err= 0: pid=673706: Tue Dec 10 03:58:23 2024 00:10:29.561 read: IOPS=7206, BW=28.1MiB/s (29.5MB/s)(28.2MiB/1003msec) 00:10:29.561 slat (nsec): min=1246, max=4039.3k, avg=67052.73, stdev=307644.10 00:10:29.561 clat (usec): min=560, max=20386, avg=8816.42, stdev=2618.84 00:10:29.561 lat (usec): min=3483, max=20395, avg=8883.48, stdev=2628.68 00:10:29.561 clat percentiles (usec): 00:10:29.561 | 1.00th=[ 4293], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6718], 00:10:29.561 | 30.00th=[ 7242], 40.00th=[ 7701], 50.00th=[ 8160], 60.00th=[ 8717], 00:10:29.561 | 70.00th=[ 9765], 80.00th=[10814], 90.00th=[12518], 95.00th=[14353], 00:10:29.561 | 99.00th=[16450], 99.50th=[16581], 99.90th=[19792], 99.95th=[19792], 00:10:29.561 | 99.99th=[20317] 00:10:29.561 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:10:29.561 slat (nsec): min=1768, max=4652.6k, avg=63850.02, stdev=290314.19 00:10:29.561 clat (usec): min=2905, max=17765, avg=8227.42, stdev=2580.01 00:10:29.561 lat (usec): min=2908, max=17777, avg=8291.27, stdev=2593.49 00:10:29.561 clat percentiles (usec): 00:10:29.561 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6259], 00:10:29.561 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 7898], 00:10:29.561 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[12125], 95.00th=[13698], 00:10:29.561 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:10:29.561 | 99.99th=[17695] 00:10:29.561 bw ( KiB/s): min=29736, max=31160, per=30.36%, avg=30448.00, stdev=1006.92, samples=2 00:10:29.561 iops : min= 7434, max= 7790, avg=7612.00, stdev=251.73, samples=2 00:10:29.561 lat (usec) : 750=0.01% 00:10:29.561 lat (msec) : 4=0.58%, 10=74.98%, 20=24.42%, 50=0.01% 00:10:29.561 cpu : usr=4.29%, 
sys=4.59%, ctx=1638, majf=0, minf=1 00:10:29.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:29.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.561 issued rwts: total=7228,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.561 job3: (groupid=0, jobs=1): err= 0: pid=673708: Tue Dec 10 03:58:23 2024 00:10:29.561 read: IOPS=5533, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1003msec) 00:10:29.561 slat (nsec): min=1272, max=4318.3k, avg=86544.71, stdev=356975.46 00:10:29.561 clat (usec): min=2262, max=21251, avg=11195.63, stdev=3081.56 00:10:29.561 lat (usec): min=4134, max=21259, avg=11282.18, stdev=3093.33 00:10:29.561 clat percentiles (usec): 00:10:29.561 | 1.00th=[ 5276], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 8160], 00:10:29.561 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11469], 60.00th=[12125], 00:10:29.561 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15008], 95.00th=[16581], 00:10:29.561 | 99.00th=[19268], 99.50th=[19792], 99.90th=[21365], 99.95th=[21365], 00:10:29.561 | 99.99th=[21365] 00:10:29.561 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:29.561 slat (nsec): min=1877, max=4614.5k, avg=88411.76, stdev=335497.06 00:10:29.561 clat (usec): min=3515, max=22713, avg=11486.39, stdev=3793.57 00:10:29.561 lat (usec): min=3993, max=22716, avg=11574.80, stdev=3812.74 00:10:29.561 clat percentiles (usec): 00:10:29.561 | 1.00th=[ 4752], 5.00th=[ 6325], 10.00th=[ 7177], 20.00th=[ 7832], 00:10:29.561 | 30.00th=[ 8455], 40.00th=[10028], 50.00th=[11338], 60.00th=[12518], 00:10:29.561 | 70.00th=[13435], 80.00th=[14746], 90.00th=[16712], 95.00th=[18220], 00:10:29.561 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22152], 99.95th=[22676], 00:10:29.561 | 99.99th=[22676] 00:10:29.561 bw ( KiB/s): min=21848, max=23208, per=22.46%, avg=22528.00, stdev=961.67, samples=2 00:10:29.561 iops : min= 5462, max= 5802, avg=5632.00, stdev=240.42, samples=2 00:10:29.561 lat (msec) : 4=0.07%, 10=37.24%, 20=61.77%, 50=0.92% 00:10:29.561 cpu : usr=2.79%, sys=5.09%, ctx=1744, majf=0, minf=1 00:10:29.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:29.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.561 issued rwts: total=5550,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.561 00:10:29.561 Run status group 0 (all jobs): 00:10:29.561 READ: bw=93.6MiB/s (98.2MB/s), 21.6MiB/s-28.1MiB/s (22.7MB/s-29.5MB/s), io=93.9MiB (98.5MB), run=1001-1003msec 00:10:29.561 WRITE: bw=98.0MiB/s (103MB/s), 21.9MiB/s-29.9MiB/s (23.0MB/s-31.4MB/s), io=98.2MiB (103MB), run=1001-1003msec 00:10:29.561 00:10:29.561 Disk stats (read/write): 00:10:29.561 nvme0n1: ios=4887/5120, merge=0/0, ticks=15828/16210, in_queue=32038, util=86.17% 00:10:29.561 nvme0n2: ios=4796/5120, merge=0/0, ticks=15035/16224, in_queue=31259, util=86.83% 00:10:29.561 nvme0n3: ios=6227/6656, merge=0/0, ticks=15305/15908, in_queue=31213, util=88.43% 00:10:29.561 nvme0n4: ios=4705/5120, merge=0/0, ticks=15066/15606, in_queue=30672, util=89.50% 00:10:29.561 03:58:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p 
nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:29.561 [global] 00:10:29.561 thread=1 00:10:29.561 invalidate=1 00:10:29.561 rw=randwrite 00:10:29.561 time_based=1 00:10:29.561 runtime=1 00:10:29.561 ioengine=libaio 00:10:29.561 direct=1 00:10:29.561 bs=4096 00:10:29.561 iodepth=128 00:10:29.561 norandommap=0 00:10:29.561 numjobs=1 00:10:29.561 00:10:29.561 verify_dump=1 00:10:29.561 verify_backlog=512 00:10:29.561 verify_state_save=0 00:10:29.561 do_verify=1 00:10:29.561 verify=crc32c-intel 00:10:29.561 [job0] 00:10:29.561 filename=/dev/nvme0n1 00:10:29.561 [job1] 00:10:29.561 filename=/dev/nvme0n2 00:10:29.561 [job2] 00:10:29.561 filename=/dev/nvme0n3 00:10:29.561 [job3] 00:10:29.561 filename=/dev/nvme0n4 00:10:29.561 Could not set queue depth (nvme0n1) 00:10:29.561 Could not set queue depth (nvme0n2) 00:10:29.561 Could not set queue depth (nvme0n3) 00:10:29.561 Could not set queue depth (nvme0n4) 00:10:29.830 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.830 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.830 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.830 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.830 fio-3.35 00:10:29.830 Starting 4 threads 00:10:31.203 00:10:31.203 job0: (groupid=0, jobs=1): err= 0: pid=674131: Tue Dec 10 03:58:25 2024 00:10:31.203 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:10:31.203 slat (nsec): min=1247, max=4480.7k, avg=70139.42, stdev=297836.75 00:10:31.203 clat (usec): min=2562, max=18750, avg=9118.46, stdev=2849.18 00:10:31.203 lat (usec): min=2611, max=18755, avg=9188.60, stdev=2865.96 00:10:31.203 clat percentiles (usec): 00:10:31.203 | 1.00th=[ 4047], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 6718], 00:10:31.203 | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[ 8586], 60.00th=[ 9503], 00:10:31.203 | 70.00th=[10945], 80.00th=[11731], 90.00th=[12911], 95.00th=[14222], 00:10:31.203 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:10:31.203 | 99.99th=[18744] 00:10:31.203 write: IOPS=7270, BW=28.4MiB/s (29.8MB/s)(28.5MiB/1004msec); 0 zone resets 00:10:31.203 slat (nsec): min=1667, max=6187.2k, avg=65230.56, stdev=281160.74 00:10:31.203 clat (usec): min=1909, max=20087, avg=8473.90, stdev=2615.99 00:10:31.203 lat (usec): min=3036, max=20090, avg=8539.13, stdev=2629.02 00:10:31.203 clat percentiles (usec): 00:10:31.203 | 1.00th=[ 3752], 5.00th=[ 5014], 10.00th=[ 5932], 20.00th=[ 6325], 00:10:31.203 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7898], 60.00th=[ 8848], 00:10:31.203 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11600], 95.00th=[13173], 00:10:31.203 | 99.00th=[15795], 99.50th=[16188], 99.90th=[20055], 99.95th=[20055], 00:10:31.203 | 99.99th=[20055] 00:10:31.203 bw ( KiB/s): min=25064, max=32312, per=27.63%, avg=28688.00, stdev=5125.11, samples=2 00:10:31.203 iops : min= 6266, max= 8078, avg=7172.00, stdev=1281.28, samples=2 00:10:31.203 lat (msec) : 2=0.01%, 4=1.22%, 10=66.06%, 20=32.55%, 50=0.17% 00:10:31.203 cpu : usr=2.99%, sys=4.69%, ctx=1771, majf=0, minf=1 00:10:31.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:31.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:10:31.203 issued rwts: total=7168,7300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.203 job1: (groupid=0, jobs=1): err= 0: pid=674132: Tue Dec 10 03:58:25 2024 00:10:31.203 read: IOPS=6313, BW=24.7MiB/s (25.9MB/s)(24.8MiB/1004msec) 00:10:31.203 slat (nsec): min=1214, max=4048.5k, avg=78830.31, stdev=375661.34 00:10:31.203 clat (usec): min=2179, max=19257, avg=10168.25, stdev=3146.53 00:10:31.203 lat (usec): min=3736, max=19262, avg=10247.08, stdev=3152.47 00:10:31.203 clat percentiles (usec): 00:10:31.203 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 6849], 00:10:31.203 | 30.00th=[ 7308], 40.00th=[ 8717], 50.00th=[10945], 60.00th=[11600], 00:10:31.203 | 70.00th=[12649], 80.00th=[13173], 90.00th=[13829], 95.00th=[14222], 00:10:31.203 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:10:31.203 | 99.99th=[19268] 00:10:31.203 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:10:31.203 slat (nsec): min=1727, max=3988.4k, avg=72490.40, stdev=341582.47 00:10:31.203 clat (usec): min=2856, max=18288, avg=9419.79, stdev=3265.32 00:10:31.203 lat (usec): min=2858, max=20432, avg=9492.28, stdev=3275.46 00:10:31.203 clat percentiles (usec): 00:10:31.203 | 1.00th=[ 4015], 5.00th=[ 5538], 10.00th=[ 5866], 20.00th=[ 6390], 00:10:31.203 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 8848], 60.00th=[11076], 00:10:31.203 | 70.00th=[12125], 80.00th=[12911], 90.00th=[13566], 95.00th=[14091], 00:10:31.203 | 99.00th=[16319], 99.50th=[17171], 99.90th=[17433], 99.95th=[18220], 00:10:31.203 | 99.99th=[18220] 00:10:31.203 bw ( KiB/s): min=24576, max=28672, per=25.64%, avg=26624.00, stdev=2896.31, samples=2 00:10:31.203 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:10:31.203 lat (msec) : 4=0.85%, 10=49.28%, 20=49.87% 00:10:31.203 cpu : usr=2.99%, sys=3.39%, ctx=1493, majf=0, minf=1 00:10:31.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:31.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.203 issued rwts: total=6339,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.203 job2: (groupid=0, jobs=1): err= 0: pid=674134: Tue Dec 10 03:58:25 2024 00:10:31.203 read: IOPS=5235, BW=20.5MiB/s (21.4MB/s)(20.5MiB/1003msec) 00:10:31.203 slat (nsec): min=1259, max=4065.9k, avg=92129.99, stdev=445811.96 00:10:31.203 clat (usec): min=1855, max=20121, avg=11697.96, stdev=2829.37 00:10:31.203 lat (usec): min=2697, max=20124, avg=11790.09, stdev=2818.16 00:10:31.203 clat percentiles (usec): 00:10:31.203 | 1.00th=[ 4948], 5.00th=[ 6915], 10.00th=[ 7504], 20.00th=[ 8586], 00:10:31.203 | 30.00th=[10290], 40.00th=[11600], 50.00th=[12256], 60.00th=[13042], 00:10:31.203 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14615], 95.00th=[15270], 00:10:31.203 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:10:31.203 | 99.99th=[20055] 00:10:31.203 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:31.203 slat (nsec): min=1751, max=4560.8k, avg=88533.29, stdev=431814.79 00:10:31.203 clat (usec): min=3696, max=19402, avg=11586.10, stdev=2759.00 00:10:31.203 lat (usec): min=4192, max=19405, avg=11674.63, stdev=2751.41 00:10:31.203 clat percentiles (usec): 00:10:31.203 | 1.00th=[ 4883], 5.00th=[ 6194], 10.00th=[ 7308], 
20.00th=[ 9372], 00:10:31.203 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12125], 60.00th=[12649], 00:10:31.203 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14484], 95.00th=[15401], 00:10:31.203 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19006], 99.95th=[19530], 00:10:31.203 | 99.99th=[19530] 00:10:31.203 bw ( KiB/s): min=20480, max=24576, per=21.70%, avg=22528.00, stdev=2896.31, samples=2 00:10:31.203 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:31.203 lat (msec) : 2=0.01%, 4=0.06%, 10=26.45%, 20=73.46%, 50=0.01% 00:10:31.203 cpu : usr=2.20%, sys=3.79%, ctx=1696, majf=0, minf=1 00:10:31.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:31.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.203 issued rwts: total=5251,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.203 job3: (groupid=0, jobs=1): err= 0: pid=674135: Tue Dec 10 03:58:25 2024 00:10:31.203 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:10:31.203 slat (nsec): min=1298, max=5167.0k, avg=77632.16, stdev=344659.24 00:10:31.203 clat (usec): min=2950, max=22741, avg=9921.52, stdev=3104.13 00:10:31.203 lat (usec): min=2954, max=23097, avg=9999.15, stdev=3121.97 00:10:31.203 clat percentiles (usec): 00:10:31.203 | 1.00th=[ 4621], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 7373], 00:10:31.203 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10552], 00:10:31.203 | 70.00th=[11338], 80.00th=[11863], 90.00th=[13698], 95.00th=[15270], 00:10:31.203 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22414], 99.95th=[22676], 00:10:31.203 | 99.99th=[22676] 00:10:31.203 write: IOPS=6449, BW=25.2MiB/s (26.4MB/s)(25.3MiB/1004msec); 0 zone resets 00:10:31.203 slat (nsec): min=1735, max=4484.1k, avg=77637.10, stdev=326038.97 00:10:31.203 clat (usec): min=2680, max=19122, avg=10181.53, stdev=2934.64 00:10:31.203 lat (usec): min=3414, max=19125, avg=10259.17, stdev=2947.55 00:10:31.203 clat percentiles (usec): 00:10:31.203 | 1.00th=[ 4752], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7570], 00:10:31.203 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10814], 00:10:31.203 | 70.00th=[11469], 80.00th=[12256], 90.00th=[14615], 95.00th=[15533], 00:10:31.203 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19006], 99.95th=[19006], 00:10:31.203 | 99.99th=[19006] 00:10:31.203 bw ( KiB/s): min=24144, max=26632, per=24.45%, avg=25388.00, stdev=1759.28, samples=2 00:10:31.203 iops : min= 6036, max= 6658, avg=6347.00, stdev=439.82, samples=2 00:10:31.203 lat (msec) : 4=0.32%, 10=52.14%, 20=47.01%, 50=0.52% 00:10:31.203 cpu : usr=2.59%, sys=4.49%, ctx=1558, majf=0, minf=1 00:10:31.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:31.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.203 issued rwts: total=6144,6475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.204 00:10:31.204 Run status group 0 (all jobs): 00:10:31.204 READ: bw=96.9MiB/s (102MB/s), 20.5MiB/s-27.9MiB/s (21.4MB/s-29.2MB/s), io=97.3MiB (102MB), run=1003-1004msec 00:10:31.204 WRITE: bw=101MiB/s (106MB/s), 21.9MiB/s-28.4MiB/s (23.0MB/s-29.8MB/s), io=102MiB (107MB), run=1003-1004msec 00:10:31.204 
00:10:31.204 Disk stats (read/write): 00:10:31.204 nvme0n1: ios=6194/6368, merge=0/0, ticks=16863/15836, in_queue=32699, util=87.27% 00:10:31.204 nvme0n2: ios=5120/5353, merge=0/0, ticks=14449/13548, in_queue=27997, util=87.34% 00:10:31.204 nvme0n3: ios=4608/4674, merge=0/0, ticks=14222/13989, in_queue=28211, util=88.75% 00:10:31.204 nvme0n4: ios=5255/5632, merge=0/0, ticks=16476/17008, in_queue=33484, util=89.81% 00:10:31.204 03:58:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:31.204 03:58:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=674377 00:10:31.204 03:58:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:31.204 03:58:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:31.204 [global] 00:10:31.204 thread=1 00:10:31.204 invalidate=1 00:10:31.204 rw=read 00:10:31.204 time_based=1 00:10:31.204 runtime=10 00:10:31.204 ioengine=libaio 00:10:31.204 direct=1 00:10:31.204 bs=4096 00:10:31.204 iodepth=1 00:10:31.204 norandommap=1 00:10:31.204 numjobs=1 00:10:31.204 00:10:31.204 [job0] 00:10:31.204 filename=/dev/nvme0n1 00:10:31.204 [job1] 00:10:31.204 filename=/dev/nvme0n2 00:10:31.204 [job2] 00:10:31.204 filename=/dev/nvme0n3 00:10:31.204 [job3] 00:10:31.204 filename=/dev/nvme0n4 00:10:31.204 Could not set queue depth (nvme0n1) 00:10:31.204 Could not set queue depth (nvme0n2) 00:10:31.204 Could not set queue depth (nvme0n3) 00:10:31.204 Could not set queue depth (nvme0n4) 00:10:31.204 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.204 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.204 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.204 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.204 fio-3.35 00:10:31.204 Starting 4 threads 00:10:34.481 03:58:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:34.481 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=84377600, buflen=4096 00:10:34.481 fio: pid=674562, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.482 03:58:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:34.482 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=90984448, buflen=4096 00:10:34.482 fio: pid=674561, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.482 03:58:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.482 03:58:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:34.482 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41476096, buflen=4096 00:10:34.482 fio: pid=674558, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.482 03:58:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- 
# for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.482 03:58:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:34.740 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54018048, buflen=4096 00:10:34.740 fio: pid=674560, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.740 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.740 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:34.740 00:10:34.740 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=674558: Tue Dec 10 03:58:29 2024 00:10:34.740 read: IOPS=8601, BW=33.6MiB/s (35.2MB/s)(104MiB/3082msec) 00:10:34.740 slat (usec): min=5, max=34819, avg= 9.36, stdev=240.55 00:10:34.740 clat (usec): min=46, max=226, avg=104.84, stdev=26.98 00:10:34.740 lat (usec): min=52, max=34899, avg=114.20, stdev=241.92 00:10:34.740 clat percentiles (usec): 00:10:34.740 | 1.00th=[ 55], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 80], 00:10:34.740 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 111], 60.00th=[ 121], 00:10:34.740 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 157], 00:10:34.740 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 180], 00:10:34.740 | 99.99th=[ 184] 00:10:34.740 bw ( KiB/s): min=30160, max=43744, per=28.36%, avg=34529.60, stdev=6214.89, samples=5 00:10:34.740 iops : min= 7540, max=10936, avg=8632.40, stdev=1553.72, samples=5 00:10:34.740 lat (usec) : 50=0.25%, 100=48.29%, 250=51.47% 00:10:34.740 cpu : usr=2.01%, sys=7.14%, ctx=26515, majf=0, minf=1 00:10:34.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.740 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.740 issued rwts: total=26511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.740 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=674560: Tue Dec 10 03:58:29 2024 00:10:34.740 read: IOPS=9102, BW=35.6MiB/s (37.3MB/s)(116MiB/3249msec) 00:10:34.740 slat (usec): min=5, max=18711, avg= 9.35, stdev=178.60 00:10:34.740 clat (usec): min=34, max=215, avg=99.16, stdev=30.01 00:10:34.740 lat (usec): min=52, max=18798, avg=108.51, stdev=181.00 00:10:34.740 clat percentiles (usec): 00:10:34.740 | 1.00th=[ 50], 5.00th=[ 54], 10.00th=[ 60], 20.00th=[ 76], 00:10:34.740 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 119], 00:10:34.740 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 157], 00:10:34.740 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 180], 00:10:34.740 | 99.99th=[ 184] 00:10:34.740 bw ( KiB/s): min=30160, max=45000, per=29.02%, avg=35335.17, stdev=6140.85, samples=6 00:10:34.740 iops : min= 7540, max=11250, avg=8833.67, stdev=1535.16, samples=6 00:10:34.740 lat (usec) : 50=1.27%, 100=53.97%, 250=44.75% 00:10:34.740 cpu : usr=2.56%, sys=7.24%, ctx=29580, majf=0, minf=2 00:10:34.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.740 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.740 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.740 issued rwts: total=29573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.740 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=674561: Tue Dec 10 03:58:29 2024 00:10:34.740 read: IOPS=7686, BW=30.0MiB/s (31.5MB/s)(86.8MiB/2890msec) 00:10:34.740 slat (usec): min=4, max=15919, avg= 8.46, stdev=146.12 00:10:34.740 clat (usec): min=60, max=383, avg=119.31, stdev=21.64 00:10:34.740 lat (usec): min=67, max=16010, avg=127.77, stdev=147.42 00:10:34.740 clat percentiles (usec): 00:10:34.740 | 1.00th=[ 75], 5.00th=[ 81], 10.00th=[ 86], 20.00th=[ 97], 00:10:34.740 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:10:34.740 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 161], 00:10:34.740 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 182], 99.95th=[ 186], 00:10:34.740 | 99.99th=[ 343] 00:10:34.740 bw ( KiB/s): min=30128, max=32528, per=25.24%, avg=30736.00, stdev=1014.71, samples=5 00:10:34.740 iops : min= 7532, max= 8132, avg=7684.00, stdev=253.68, samples=5 00:10:34.740 lat (usec) : 100=20.99%, 250=79.00%, 500=0.01% 00:10:34.740 cpu : usr=1.94%, sys=6.58%, ctx=22216, majf=0, minf=2 00:10:34.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.740 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.740 issued rwts: total=22214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.740 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=674562: Tue Dec 10 03:58:29 2024 00:10:34.740 read: IOPS=7686, BW=30.0MiB/s (31.5MB/s)(80.5MiB/2680msec) 00:10:34.740 slat (nsec): min=6140, max=35370, avg=7150.05, stdev=759.93 00:10:34.740 clat (usec): min=64, max=386, avg=121.35, stdev=21.15 00:10:34.740 lat (usec): min=70, max=392, avg=128.50, stdev=21.17 00:10:34.740 clat percentiles (usec): 00:10:34.740 | 1.00th=[ 76], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 115], 00:10:34.740 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 126], 00:10:34.740 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 143], 95.00th=[ 161], 00:10:34.740 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 186], 99.95th=[ 192], 00:10:34.740 | 99.99th=[ 322] 00:10:34.740 bw ( KiB/s): min=30016, max=33328, per=25.40%, avg=30923.20, stdev=1356.25, samples=5 00:10:34.740 iops : min= 7504, max= 8332, avg=7730.80, stdev=339.06, samples=5 00:10:34.740 lat (usec) : 100=16.97%, 250=83.01%, 500=0.01% 00:10:34.740 cpu : usr=1.72%, sys=6.91%, ctx=20601, majf=0, minf=2 00:10:34.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.740 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.740 issued rwts: total=20601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.740 00:10:34.740 Run status group 0 (all jobs): 00:10:34.740 READ: bw=119MiB/s (125MB/s), 30.0MiB/s-35.6MiB/s (31.5MB/s-37.3MB/s), io=386MiB (405MB), run=2680-3249msec 00:10:34.740 00:10:34.740 Disk stats (read/write): 
00:10:34.740 nvme0n1: ios=24585/0, merge=0/0, ticks=2533/0, in_queue=2533, util=95.03% 00:10:34.740 nvme0n2: ios=27531/0, merge=0/0, ticks=2697/0, in_queue=2697, util=94.25% 00:10:34.740 nvme0n3: ios=22213/0, merge=0/0, ticks=2586/0, in_queue=2586, util=95.58% 00:10:34.740 nvme0n4: ios=20173/0, merge=0/0, ticks=2352/0, in_queue=2352, util=96.46% 00:10:34.999 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.999 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:35.257 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.257 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:35.257 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.257 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:35.515 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.515 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:35.772 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:35.772 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 674377 00:10:35.772 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:35.772 03:58:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:36.704 nvmf hotplug test: fio failed as expected 00:10:36.704 03:58:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:36.962 rmmod nvme_rdma 00:10:36.962 rmmod nvme_fabrics 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 671334 ']' 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 671334 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 671334 ']' 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 671334 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 671334 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 671334' 00:10:36.962 killing process with pid 671334 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 671334 00:10:36.962 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 671334 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p 
]] 00:10:37.220 00:10:37.220 real 0m24.850s 00:10:37.220 user 2m1.056s 00:10:37.220 sys 0m8.950s 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 ************************************ 00:10:37.220 END TEST nvmf_fio_target 00:10:37.220 ************************************ 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 ************************************ 00:10:37.220 START TEST nvmf_bdevio 00:10:37.220 ************************************ 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:37.220 * Looking for test storage... 00:10:37.220 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:37.220 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:37.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.479 --rc genhtml_branch_coverage=1 00:10:37.479 --rc genhtml_function_coverage=1 00:10:37.479 --rc genhtml_legend=1 00:10:37.479 --rc geninfo_all_blocks=1 00:10:37.479 --rc geninfo_unexecuted_blocks=1 00:10:37.479 00:10:37.479 ' 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:37.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.479 --rc genhtml_branch_coverage=1 00:10:37.479 --rc genhtml_function_coverage=1 00:10:37.479 --rc genhtml_legend=1 00:10:37.479 --rc geninfo_all_blocks=1 00:10:37.479 --rc geninfo_unexecuted_blocks=1 00:10:37.479 00:10:37.479 ' 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:37.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.479 --rc genhtml_branch_coverage=1 00:10:37.479 --rc genhtml_function_coverage=1 00:10:37.479 --rc genhtml_legend=1 00:10:37.479 --rc geninfo_all_blocks=1 00:10:37.479 --rc geninfo_unexecuted_blocks=1 00:10:37.479 00:10:37.479 ' 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:37.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.479 --rc genhtml_branch_coverage=1 00:10:37.479 --rc genhtml_function_coverage=1 00:10:37.479 --rc genhtml_legend=1 00:10:37.479 --rc geninfo_all_blocks=1 00:10:37.479 --rc geninfo_unexecuted_blocks=1 00:10:37.479 00:10:37.479 ' 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:37.479 03:58:31 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.479 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.480 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.480 03:58:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:42.750 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:42.750 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:42.750 03:58:36 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:42.750 Found net devices under 0000:18:00.0: mlx_0_0 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:42.750 Found net devices under 0000:18:00.1: mlx_0_1 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:10:42.750 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
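
The get_ip_address helper traced just above (nvmf/common.sh@116-117) reduces to a single pipeline over `ip -o -4`. A minimal standalone sketch of it, matching the traced commands:

    get_ip_address() {
        local interface=$1
        # With -o, ip prints one line per address; field 4 is "ADDR/PREFIX", so strip the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig
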
00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:42.751 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:42.751 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:42.751 altname enp24s0f0np0 00:10:42.751 altname ens785f0np0 00:10:42.751 inet 192.168.100.8/24 scope global mlx_0_0 00:10:42.751 valid_lft forever preferred_lft forever 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:42.751 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:42.751 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:42.751 altname enp24s0f1np1 00:10:42.751 altname ens785f1np1 00:10:42.751 inet 192.168.100.9/24 scope global mlx_0_1 00:10:42.751 valid_lft forever preferred_lft forever 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
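
The get_rdma_if_list loop traced around here (nvmf/common.sh@96-109) only emits a netdev when rxe_cfg also reports it, i.e. when the port is RDMA-capable. A minimal sketch of that filter, assuming net_devs[] was already populated by the PCI scan above and that rxe_cfg wraps scripts/rxe_cfg_small.sh as traced at common.sh@58:

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        # rxe_cfg lists the RDMA-capable netdevs; the mlx5 ports show up here too.
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2   # move on to the next net_dev once matched
                fi
            done
        done
    }
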
00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:42.751 192.168.100.9' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:42.751 192.168.100.9' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:42.751 192.168.100.9' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=678768 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 678768 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 678768 ']' 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.751 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:42.752 [2024-12-10 03:58:36.757942] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:42.752 [2024-12-10 03:58:36.757989] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.752 [2024-12-10 03:58:36.815575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.752 [2024-12-10 03:58:36.854194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.752 [2024-12-10 03:58:36.854227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.752 [2024-12-10 03:58:36.854234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.752 [2024-12-10 03:58:36.854240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.752 [2024-12-10 03:58:36.854246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
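
The address split traced at nvmf/common.sh@485-487 above is plain head/tail over the newline-separated RDMA_IP_LIST: the first address becomes the primary target IP, the second (if any) the secondary. A minimal sketch; the error message in the guard is illustrative, not the harness's wording:

    RDMA_IP_LIST=$(get_available_rdma_ips)    # two lines here: 192.168.100.8 and 192.168.100.9
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    [ -z "$NVMF_FIRST_TARGET_IP" ] && echo "no RDMA-capable NIC found" && exit 1
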
00:10:42.752 [2024-12-10 03:58:36.855620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:42.752 [2024-12-10 03:58:36.855727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:42.752 [2024-12-10 03:58:36.855831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.752 [2024-12-10 03:58:36.855833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.752 03:58:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.752 [2024-12-10 03:58:37.005617] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bc49c0/0x1bc8eb0) succeed. 00:10:42.752 [2024-12-10 03:58:37.013930] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bc6050/0x1c0a550) succeed. 
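
With the target up, the transport just created via rpc_cmd (target/bdevio.sh@18) and the subsystem setup traced next (bdevio.sh@19-22) correspond to plain scripts/rpc.py calls against the default /var/tmp/spdk.sock. A sketch of the same sequence run by hand:

    # Assumes nvmf_tgt is already listening on the default RPC socket.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
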
00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.010 Malloc0 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.010 [2024-12-10 03:58:37.176914] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:43.010 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.011 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:43.011 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:43.011 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:43.011 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:43.011 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:43.011 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:43.011 { 00:10:43.011 "params": { 00:10:43.011 "name": "Nvme$subsystem", 00:10:43.011 "trtype": "$TEST_TRANSPORT", 00:10:43.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:43.011 "adrfam": "ipv4", 00:10:43.011 "trsvcid": "$NVMF_PORT", 00:10:43.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:43.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:43.011 "hdgst": ${hdgst:-false}, 00:10:43.011 "ddgst": ${ddgst:-false} 00:10:43.011 }, 00:10:43.011 "method": "bdev_nvme_attach_controller" 00:10:43.011 } 00:10:43.011 EOF 00:10:43.011 )") 00:10:43.011 03:58:37 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:43.011 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:43.011 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:43.011 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:43.011 "params": { 00:10:43.011 "name": "Nvme1", 00:10:43.011 "trtype": "rdma", 00:10:43.011 "traddr": "192.168.100.8", 00:10:43.011 "adrfam": "ipv4", 00:10:43.011 "trsvcid": "4420", 00:10:43.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:43.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:43.011 "hdgst": false, 00:10:43.011 "ddgst": false 00:10:43.011 }, 00:10:43.011 "method": "bdev_nvme_attach_controller" 00:10:43.011 }' 00:10:43.011 [2024-12-10 03:58:37.225276] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:10:43.011 [2024-12-10 03:58:37.225319] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678920 ] 00:10:43.011 [2024-12-10 03:58:37.283559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:43.011 [2024-12-10 03:58:37.323946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.011 [2024-12-10 03:58:37.324039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.011 [2024-12-10 03:58:37.324039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.269 I/O targets: 00:10:43.269 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:43.269 00:10:43.269 00:10:43.270 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.270 http://cunit.sourceforge.net/ 00:10:43.270 00:10:43.270 00:10:43.270 Suite: bdevio tests on: Nvme1n1 00:10:43.270 Test: blockdev write read block ...passed 00:10:43.270 Test: blockdev write zeroes read block ...passed 00:10:43.270 Test: blockdev write zeroes read no split ...passed 00:10:43.270 Test: blockdev write zeroes read split ...passed 00:10:43.270 Test: blockdev write zeroes read split partial ...passed 00:10:43.270 Test: blockdev reset ...[2024-12-10 03:58:37.529082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:43.270 [2024-12-10 03:58:37.550781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:10:43.270 [2024-12-10 03:58:37.578682] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:43.270 passed 00:10:43.270 Test: blockdev write read 8 blocks ...passed 00:10:43.270 Test: blockdev write read size > 128k ...passed 00:10:43.270 Test: blockdev write read invalid size ...passed 00:10:43.270 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:43.270 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:43.270 Test: blockdev write read max offset ...passed 00:10:43.270 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:43.270 Test: blockdev writev readv 8 blocks ...passed 00:10:43.270 Test: blockdev writev readv 30 x 1block ...passed 00:10:43.270 Test: blockdev writev readv block ...passed 00:10:43.270 Test: blockdev writev readv size > 128k ...passed 00:10:43.270 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:43.270 Test: blockdev comparev and writev ...[2024-12-10 03:58:37.581434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.270 [2024-12-10 03:58:37.581461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.581470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.270 [2024-12-10 03:58:37.581480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.581642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.270 [2024-12-10 03:58:37.581651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.581659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.270 [2024-12-10 03:58:37.581666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.581830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.270 [2024-12-10 03:58:37.581839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.581846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.270 [2024-12-10 03:58:37.581853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.582018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.270 [2024-12-10 03:58:37.582028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.582035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.270 [2024-12-10 03:58:37.582042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:43.270 passed 00:10:43.270 Test: blockdev nvme passthru rw ...passed 00:10:43.270 Test: blockdev nvme passthru vendor specific ...[2024-12-10 03:58:37.582289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:43.270 [2024-12-10 03:58:37.582299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.582341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:43.270 [2024-12-10 03:58:37.582349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.582384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:43.270 [2024-12-10 03:58:37.582393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:43.270 [2024-12-10 03:58:37.582435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:43.270 [2024-12-10 03:58:37.582443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:43.270 passed 00:10:43.270 Test: blockdev nvme admin passthru ...passed 00:10:43.270 Test: blockdev copy ...passed 00:10:43.270 00:10:43.270 Run Summary: Type Total Ran Passed Failed Inactive 00:10:43.270 suites 1 1 n/a 0 0 00:10:43.270 tests 23 23 23 0 0 00:10:43.270 asserts 152 152 152 0 n/a 00:10:43.270 00:10:43.270 Elapsed time = 0.169 seconds 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:43.529 rmmod nvme_rdma 00:10:43.529 rmmod nvme_fabrics 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.529 03:58:37 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 678768 ']' 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 678768 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 678768 ']' 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 678768 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 678768 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 678768' 00:10:43.529 killing process with pid 678768 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 678768 00:10:43.529 03:58:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 678768 00:10:43.789 03:58:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.789 03:58:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:43.789 00:10:43.789 real 0m6.586s 00:10:43.789 user 0m7.211s 00:10:43.789 sys 0m4.312s 00:10:43.789 03:58:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.789 03:58:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.789 ************************************ 00:10:43.789 END TEST nvmf_bdevio 00:10:43.789 ************************************ 00:10:43.789 03:58:38 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:43.789 00:10:43.789 real 3m47.360s 00:10:43.789 user 10m24.240s 00:10:43.789 sys 1m18.579s 00:10:43.789 03:58:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.789 03:58:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.789 ************************************ 00:10:43.789 END TEST nvmf_target_core 00:10:43.789 ************************************ 00:10:43.789 03:58:38 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:43.789 03:58:38 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.789 03:58:38 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.789 03:58:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:44.049 ************************************ 00:10:44.050 START TEST nvmf_target_extra 00:10:44.050 ************************************ 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:44.050 * Looking for test storage... 00:10:44.050 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.050 --rc genhtml_branch_coverage=1 00:10:44.050 --rc genhtml_function_coverage=1 00:10:44.050 --rc genhtml_legend=1 00:10:44.050 --rc geninfo_all_blocks=1 00:10:44.050 --rc geninfo_unexecuted_blocks=1 00:10:44.050 00:10:44.050 ' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.050 --rc genhtml_branch_coverage=1 00:10:44.050 --rc genhtml_function_coverage=1 00:10:44.050 --rc genhtml_legend=1 00:10:44.050 --rc geninfo_all_blocks=1 00:10:44.050 --rc geninfo_unexecuted_blocks=1 00:10:44.050 00:10:44.050 ' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.050 --rc genhtml_branch_coverage=1 00:10:44.050 --rc genhtml_function_coverage=1 00:10:44.050 --rc genhtml_legend=1 00:10:44.050 --rc geninfo_all_blocks=1 00:10:44.050 --rc geninfo_unexecuted_blocks=1 00:10:44.050 00:10:44.050 ' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.050 --rc genhtml_branch_coverage=1 00:10:44.050 --rc genhtml_function_coverage=1 00:10:44.050 --rc genhtml_legend=1 00:10:44.050 --rc geninfo_all_blocks=1 00:10:44.050 --rc geninfo_unexecuted_blocks=1 00:10:44.050 00:10:44.050 ' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.050 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.050 ************************************ 00:10:44.050 START TEST nvmf_example 00:10:44.050 ************************************ 00:10:44.050 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:44.310 * Looking for test storage... 
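
The `lt 1.15 2` gate traced above (scripts/common.sh@333-368, and traced again below when nvmf_example.sh sources the same file) splits both versions on ".", "-" and ":" and compares them field by field. A trimmed sketch of that comparison; the real helper also normalizes non-numeric fields through decimal(), which is omitted here:

    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && return 1   # left side already greater, so not "<"
            ((ver1[v] < ver2[v])) && return 0
        done
        return 1   # equal versions are not strictly less
    }

    lt 1.15 2 && echo "lcov predates 2.x"   # true on this rig, hence the 1.x LCOV_OPTS above
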
00:10:44.310 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:44.310 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.311 --rc genhtml_branch_coverage=1 00:10:44.311 --rc genhtml_function_coverage=1 00:10:44.311 --rc genhtml_legend=1 00:10:44.311 --rc geninfo_all_blocks=1 00:10:44.311 --rc geninfo_unexecuted_blocks=1 00:10:44.311 00:10:44.311 ' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.311 --rc genhtml_branch_coverage=1 00:10:44.311 --rc genhtml_function_coverage=1 00:10:44.311 --rc genhtml_legend=1 00:10:44.311 --rc geninfo_all_blocks=1 00:10:44.311 --rc geninfo_unexecuted_blocks=1 00:10:44.311 00:10:44.311 ' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.311 --rc genhtml_branch_coverage=1 00:10:44.311 --rc genhtml_function_coverage=1 00:10:44.311 --rc genhtml_legend=1 00:10:44.311 --rc geninfo_all_blocks=1 00:10:44.311 --rc geninfo_unexecuted_blocks=1 00:10:44.311 00:10:44.311 ' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.311 --rc genhtml_branch_coverage=1 00:10:44.311 --rc genhtml_function_coverage=1 00:10:44.311 --rc genhtml_legend=1 00:10:44.311 --rc geninfo_all_blocks=1 00:10:44.311 --rc geninfo_unexecuted_blocks=1 00:10:44.311 00:10:44.311 ' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
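The traced helper above is scripts/common.sh deciding whether the installed lcov predates version 2: cmp_versions splits each version string on dots and dashes into arrays and walks the components numerically, treating a missing component as 0. A minimal standalone sketch of the same idea, assuming plain numeric components (the traced helper also validates each component against ^[0-9]+$ before comparing):

    cmp_versions() {
        # cmp_versions 1.15 '<' 2  -> exit 0 when 1.15 sorts before 2
        local op=$2 a b v
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            ((a > b)) && { [[ $op == '>' ]]; return; }
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                     # every component matched
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov predates 2: use the legacy --rc lcov_* spellings'

Here lcov 1.x was detected, which is why the trace above exports the legacy LCOV_OPTS/LCOV spellings with lcov_branch_coverage and lcov_function_coverage enabled.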
00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.311 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
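Each time paths/export.sh is re-sourced it unconditionally prepends the protoc, go, and golangci directories again, which is why the PATH echoed above carries the same triple several times over. Harmless here, but the idempotent form of such a prepend is short; a sketch (path_prepend is a hypothetical helper, not part of the traced scripts):

    path_prepend() {
        # Prepend $1 to PATH only if it is not already a component.
        case ":$PATH:" in
            *":$1:"*) ;;                  # already present: no-op
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin       # second call changes nothing
    export PATH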
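The '[: : integer expression expected' complaint from nvmf/common.sh line 33, printed above once per time the file is sourced, is the classic pitfall of feeding an empty string to test's numeric -eq operator: the '[' command errors out, returns nonzero, and the script simply falls through as though the flag were 0. A defensive sketch of the same kind of check; SOME_FLAG is a hypothetical stand-in for whichever variable was empty in this run:

    SOME_FLAG=''                                   # empty, as in the trace
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then           # default the empty value to 0 first
        echo 'flag enabled'
    fi
    # or guard the numeric comparison explicitly:
    [ -n "$SOME_FLAG" ] && [ "$SOME_FLAG" -eq 1 ] && echo 'flag enabled'

Either form keeps the branch well-defined instead of relying on test's error path.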
00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.311 03:58:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:50.884 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.884 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.884 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.884 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.884 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.884 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.884 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:50.885 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:50.885 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:50.885 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:50.886 Found net devices under 0000:18:00.0: mlx_0_0 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:50.886 Found net devices under 0000:18:00.1: mlx_0_1 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.886 03:58:44 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.886 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:50.887 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:50.887 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:50.887 altname enp24s0f0np0 00:10:50.887 altname ens785f0np0 00:10:50.887 inet 192.168.100.8/24 scope global mlx_0_0 00:10:50.887 valid_lft forever preferred_lft forever 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:50.887 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:50.887 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:50.887 altname enp24s0f1np1 00:10:50.887 altname ens785f1np1 00:10:50.887 inet 192.168.100.9/24 scope global mlx_0_1 00:10:50.887 valid_lft forever preferred_lft forever 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # 
get_available_rdma_ips 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:50.887 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:50.888 03:58:44 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:50.888 192.168.100.9' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:50.888 192.168.100.9' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:50.888 192.168.100.9' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=682497 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 682497 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 682497 ']' 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.888 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.889 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
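At this point nvmfexamplestart has launched build/examples/nvmf -i 0 -g 10000 -m 0xF in the background, and waitforlisten polls until the app is alive and its RPC socket appears. A minimal sketch of that polling pattern, simplified from the traced helper (which likewise caps itself with max_retries=100):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF &
    pid=$!
    sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "app $pid exited before listening" >&2; exit 1; }
        [ -S "$sock" ] && break               # RPC UNIX socket is up
        sleep 0.1
    done

Only once the socket exists do the rpc_cmd calls that follow (create the rdma transport, a 64 MiB Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and the 192.168.100.8:4420 listener) have anything to talk to.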
00:10:50.889 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.889 03:58:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:50.889 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.889 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:50.889 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:50.889 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.889 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:50.889 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:50.889 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.889 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:51.148 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:51.149 03:58:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:03.499 Initializing NVMe Controllers 00:11:03.499 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:03.499 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:03.499 Initialization complete. Launching workers. 00:11:03.499 ======================================================== 00:11:03.499 Latency(us) 00:11:03.499 Device Information : IOPS MiB/s Average min max 00:11:03.499 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25329.90 98.94 2526.49 609.67 12437.97 00:11:03.499 ======================================================== 00:11:03.499 Total : 25329.90 98.94 2526.49 609.67 12437.97 00:11:03.499 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:03.499 rmmod nvme_rdma 00:11:03.499 rmmod nvme_fabrics 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 682497 ']' 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 682497 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 682497 ']' 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 682497 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 682497 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:03.499 03:58:56 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 682497' 00:11:03.499 killing process with pid 682497 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 682497 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 682497 00:11:03.499 nvmf threads initialize successfully 00:11:03.499 bdev subsystem init successfully 00:11:03.499 created a nvmf target service 00:11:03.499 create targets's poll groups done 00:11:03.499 all subsystems of target started 00:11:03.499 nvmf target is running 00:11:03.499 all subsystems of target stopped 00:11:03.499 destroy targets's poll groups done 00:11:03.499 destroyed the nvmf target service 00:11:03.499 bdev subsystem finish successfully 00:11:03.499 nvmf threads destroy successfully 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.499 03:58:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.499 00:11:03.499 real 0m18.608s 00:11:03.499 user 0m51.828s 00:11:03.499 sys 0m4.737s 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.499 ************************************ 00:11:03.499 END TEST nvmf_example 00:11:03.499 ************************************ 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:03.499 ************************************ 00:11:03.499 START TEST nvmf_filesystem 00:11:03.499 ************************************ 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:03.499 * Looking for test storage... 
00:11:03.499 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.499 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.499 --rc genhtml_branch_coverage=1 00:11:03.499 --rc genhtml_function_coverage=1 00:11:03.499 --rc genhtml_legend=1 00:11:03.499 --rc geninfo_all_blocks=1 00:11:03.499 --rc geninfo_unexecuted_blocks=1 00:11:03.500 00:11:03.500 ' 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.500 --rc genhtml_branch_coverage=1 00:11:03.500 --rc genhtml_function_coverage=1 00:11:03.500 --rc genhtml_legend=1 00:11:03.500 --rc geninfo_all_blocks=1 00:11:03.500 --rc geninfo_unexecuted_blocks=1 00:11:03.500 00:11:03.500 ' 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.500 --rc genhtml_branch_coverage=1 00:11:03.500 --rc genhtml_function_coverage=1 00:11:03.500 --rc genhtml_legend=1 00:11:03.500 --rc geninfo_all_blocks=1 00:11:03.500 --rc geninfo_unexecuted_blocks=1 00:11:03.500 00:11:03.500 ' 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.500 --rc genhtml_branch_coverage=1 00:11:03.500 --rc genhtml_function_coverage=1 00:11:03.500 --rc genhtml_legend=1 00:11:03.500 --rc geninfo_all_blocks=1 00:11:03.500 --rc geninfo_unexecuted_blocks=1 00:11:03.500 00:11:03.500 ' 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:11:03.500 03:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:03.500 03:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:03.500 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:03.501 03:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:03.501 #define SPDK_CONFIG_H 00:11:03.501 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:03.501 #define SPDK_CONFIG_APPS 1 00:11:03.501 #define SPDK_CONFIG_ARCH native 00:11:03.501 #undef SPDK_CONFIG_ASAN 00:11:03.501 #undef SPDK_CONFIG_AVAHI 00:11:03.501 #undef SPDK_CONFIG_CET 00:11:03.501 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:03.501 #define SPDK_CONFIG_COVERAGE 1 00:11:03.501 #define SPDK_CONFIG_CROSS_PREFIX 00:11:03.501 #undef SPDK_CONFIG_CRYPTO 00:11:03.501 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:03.501 #undef SPDK_CONFIG_CUSTOMOCF 00:11:03.501 #undef SPDK_CONFIG_DAOS 00:11:03.501 #define SPDK_CONFIG_DAOS_DIR 00:11:03.501 #define SPDK_CONFIG_DEBUG 1 00:11:03.501 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:03.501 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:11:03.501 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:03.501 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:03.501 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:03.501 #undef SPDK_CONFIG_DPDK_UADK 00:11:03.501 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:11:03.501 #define SPDK_CONFIG_EXAMPLES 1 00:11:03.501 #undef SPDK_CONFIG_FC 00:11:03.501 #define SPDK_CONFIG_FC_PATH 00:11:03.501 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:03.501 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:03.501 #define SPDK_CONFIG_FSDEV 1 00:11:03.501 #undef SPDK_CONFIG_FUSE 00:11:03.501 #undef SPDK_CONFIG_FUZZER 00:11:03.501 #define SPDK_CONFIG_FUZZER_LIB 00:11:03.501 #undef SPDK_CONFIG_GOLANG 00:11:03.501 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:03.501 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:03.501 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:03.501 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:03.501 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:03.501 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:03.501 #undef SPDK_CONFIG_HAVE_LZ4 00:11:03.501 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:03.501 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:03.501 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:03.501 #define SPDK_CONFIG_IDXD 1 00:11:03.501 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:03.501 #undef SPDK_CONFIG_IPSEC_MB 00:11:03.501 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:03.501 #define SPDK_CONFIG_ISAL 1 00:11:03.501 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:03.501 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:03.501 #define SPDK_CONFIG_LIBDIR 00:11:03.501 #undef SPDK_CONFIG_LTO 00:11:03.501 #define SPDK_CONFIG_MAX_LCORES 128 00:11:03.501 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:03.501 #define SPDK_CONFIG_NVME_CUSE 1 00:11:03.501 #undef SPDK_CONFIG_OCF 00:11:03.501 #define SPDK_CONFIG_OCF_PATH 00:11:03.501 #define SPDK_CONFIG_OPENSSL_PATH 00:11:03.501 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:03.501 #define SPDK_CONFIG_PGO_DIR 00:11:03.501 #undef SPDK_CONFIG_PGO_USE 00:11:03.501 #define SPDK_CONFIG_PREFIX /usr/local 00:11:03.501 #undef SPDK_CONFIG_RAID5F 00:11:03.501 #undef SPDK_CONFIG_RBD 00:11:03.501 #define SPDK_CONFIG_RDMA 1 00:11:03.501 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:03.501 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:03.501 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:03.501 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:03.501 #define SPDK_CONFIG_SHARED 1 00:11:03.501 #undef SPDK_CONFIG_SMA 00:11:03.501 
#define SPDK_CONFIG_TESTS 1 00:11:03.501 #undef SPDK_CONFIG_TSAN 00:11:03.501 #define SPDK_CONFIG_UBLK 1 00:11:03.501 #define SPDK_CONFIG_UBSAN 1 00:11:03.501 #undef SPDK_CONFIG_UNIT_TESTS 00:11:03.501 #undef SPDK_CONFIG_URING 00:11:03.501 #define SPDK_CONFIG_URING_PATH 00:11:03.501 #undef SPDK_CONFIG_URING_ZNS 00:11:03.501 #undef SPDK_CONFIG_USDT 00:11:03.501 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:03.501 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:03.501 #undef SPDK_CONFIG_VFIO_USER 00:11:03.501 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:03.501 #define SPDK_CONFIG_VHOST 1 00:11:03.501 #define SPDK_CONFIG_VIRTIO 1 00:11:03.501 #undef SPDK_CONFIG_VTUNE 00:11:03.501 #define SPDK_CONFIG_VTUNE_DIR 00:11:03.501 #define SPDK_CONFIG_WERROR 1 00:11:03.501 #define SPDK_CONFIG_WPDK_DIR 00:11:03.501 #undef SPDK_CONFIG_XNVME 00:11:03.501 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:03.501 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:03.502 03:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:03.502 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:03.503 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 684938 ]] 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 684938 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.xwN8Wy 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xwN8Wy/tests/target /tmp/spdk.xwN8Wy 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67952492544 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=78631596032 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10679103488 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39254102016 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39315795968 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=61693952 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=15703203840 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=15726321664 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23117824 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39315181568 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39315800064 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=618496 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:03.504 03:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=7863144448 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=7863156736 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:03.504 * Looking for test storage... 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:03.504 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=67952492544 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=12893696000 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.505 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:03.505 03:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:03.505 03:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.505 --rc genhtml_branch_coverage=1 00:11:03.505 --rc genhtml_function_coverage=1 00:11:03.505 --rc genhtml_legend=1 00:11:03.505 --rc geninfo_all_blocks=1 00:11:03.505 --rc geninfo_unexecuted_blocks=1 00:11:03.505 00:11:03.505 ' 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.505 --rc genhtml_branch_coverage=1 00:11:03.505 --rc genhtml_function_coverage=1 00:11:03.505 --rc genhtml_legend=1 00:11:03.505 --rc geninfo_all_blocks=1 00:11:03.505 --rc geninfo_unexecuted_blocks=1 00:11:03.505 00:11:03.505 ' 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.505 --rc genhtml_branch_coverage=1 00:11:03.505 --rc genhtml_function_coverage=1 00:11:03.505 --rc genhtml_legend=1 00:11:03.505 --rc geninfo_all_blocks=1 00:11:03.505 --rc geninfo_unexecuted_blocks=1 00:11:03.505 00:11:03.505 ' 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.505 --rc genhtml_branch_coverage=1 00:11:03.505 --rc genhtml_function_coverage=1 00:11:03.505 --rc genhtml_legend=1 00:11:03.505 --rc geninfo_all_blocks=1 00:11:03.505 --rc geninfo_unexecuted_blocks=1 00:11:03.505 00:11:03.505 ' 00:11:03.505 
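cmp_versions, as traced above for `lt 1.15 2`, splits both version strings on dots and dashes and compares field by field. A condensed sketch of the same idea (the real scripts/common.sh also normalizes non-numeric fields via its decimal helper):

    lt() {   # returns 0 when $1 is an older version than $2
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x"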
03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.505 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.506 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.506 03:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:10.070 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:10.070 
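The `[: : integer expression expected` complaint a few entries back comes from nvmf/common.sh line 33, where `'[' '' -eq 1 ']'` is evaluated with an unset variable: the left operand expands to the empty string, which `[` refuses as a numeric operand. The usual hardening, sketched with a placeholder variable name (the actual flag name guarded at line 33 is not visible in this trace), is to default the expansion before comparing:

    [ "$FLAG" -eq 1 ]        # FLAG unset -> bash prints "[: : integer expression expected"
    [ "${FLAG:-0}" -eq 1 ]   # defaulted expansion always hands test an integer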
03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:10.070 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:10.070 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:10.071 Found net devices under 0000:18:00.0: mlx_0_0 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:10.071 Found net devices under 0000:18:00.1: mlx_0_1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.071 03:59:03 
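rdma_device_init simply loads the kernel IB/RDMA stack before any interface is touched; the module list is exactly what the trace shows, just reproduced here as a loop instead of seven individual modprobe calls:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe $mod
    done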
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:10.071 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:10.071 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:10.071 altname enp24s0f0np0 00:11:10.071 altname ens785f0np0 00:11:10.071 inet 192.168.100.8/24 scope global mlx_0_0 00:11:10.071 valid_lft forever preferred_lft forever 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:10.071 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:10.071 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:10.071 altname enp24s0f1np1 00:11:10.071 altname ens785f1np1 00:11:10.071 inet 192.168.100.9/24 scope global mlx_0_1 00:11:10.071 valid_lft forever preferred_lft forever 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:10.071 03:59:03 
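get_ip_address isolates the IPv4 address from one-line `ip` output: field 4 of `ip -o -4 addr show` is the CIDR address, and the cut strips the prefix length. The same helper, as the trace runs it:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
    get_ip_address mlx_0_1   # -> 192.168.100.9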
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:10.071 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:10.072 192.168.100.9' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:10.072 192.168.100.9' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:10.072 192.168.100.9' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 ************************************ 00:11:10.072 START TEST nvmf_filesystem_no_in_capsule 00:11:10.072 ************************************ 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.072 03:59:03 
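RDMA_IP_LIST holds one address per line, so the first and second target IPs fall out of head and tail exactly as traced above:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9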
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=688187 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 688187 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 688187 ']' 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 [2024-12-10 03:59:03.484437] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:10.072 [2024-12-10 03:59:03.484480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.072 [2024-12-10 03:59:03.541614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.072 [2024-12-10 03:59:03.581944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.072 [2024-12-10 03:59:03.581979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.072 [2024-12-10 03:59:03.581985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.072 [2024-12-10 03:59:03.581991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.072 [2024-12-10 03:59:03.581996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
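nvmfappstart launches nvmf_tgt in the background (pid 688187 here) and then blocks in waitforlisten until the RPC socket answers. A simplified polling loop in the same spirit, assuming the default socket path; the real helper also honors max_retries and a kill-on-timeout path:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            # rpc.py fails until the target's RPC server is accepting connections
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            kill -0 "$pid" || return 1   # app died before it started listening
            sleep 0.1
        done
        return 1
    }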
00:11:10.072 [2024-12-10 03:59:03.586285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.072 [2024-12-10 03:59:03.586303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.072 [2024-12-10 03:59:03.586391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.072 [2024-12-10 03:59:03.586393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 [2024-12-10 03:59:03.731291] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:10.072 [2024-12-10 03:59:03.750028] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12420c0/0x12465b0) succeed. 00:11:10.072 [2024-12-10 03:59:03.758273] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1243750/0x1287c50) succeed. 
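The transport is created over the RPC socket; in this no-in-capsule variant the test passes -c 0, and the WARNING above shows the target clamping the in-capsule size to the 256-byte minimum needed for msdbd=16. The equivalent standalone invocation, assuming the default socket path:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0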
00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 Malloc1 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.072 03:59:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 [2024-12-10 03:59:04.000251] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:10.072 03:59:04 
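Target-side setup is three RPCs on top of the transport: a 512 MiB malloc bdev, a subsystem that exposes it as a namespace, and an RDMA listener on the first target IP. Stripped of the rpc_cmd wrapper, the sequence the trace executes is:

    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420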
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.072 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:10.072 { 00:11:10.072 "name": "Malloc1", 00:11:10.072 "aliases": [ 00:11:10.072 "e59e8de0-0b5f-4bcb-910f-c856d11feded" 00:11:10.072 ], 00:11:10.072 "product_name": "Malloc disk", 00:11:10.072 "block_size": 512, 00:11:10.072 "num_blocks": 1048576, 00:11:10.072 "uuid": "e59e8de0-0b5f-4bcb-910f-c856d11feded", 00:11:10.072 "assigned_rate_limits": { 00:11:10.072 "rw_ios_per_sec": 0, 00:11:10.072 "rw_mbytes_per_sec": 0, 00:11:10.072 "r_mbytes_per_sec": 0, 00:11:10.073 "w_mbytes_per_sec": 0 00:11:10.073 }, 00:11:10.073 "claimed": true, 00:11:10.073 "claim_type": "exclusive_write", 00:11:10.073 "zoned": false, 00:11:10.073 "supported_io_types": { 00:11:10.073 "read": true, 00:11:10.073 "write": true, 00:11:10.073 "unmap": true, 00:11:10.073 "flush": true, 00:11:10.073 "reset": true, 00:11:10.073 "nvme_admin": false, 00:11:10.073 "nvme_io": false, 00:11:10.073 "nvme_io_md": false, 00:11:10.073 "write_zeroes": true, 00:11:10.073 "zcopy": true, 00:11:10.073 "get_zone_info": false, 00:11:10.073 "zone_management": false, 00:11:10.073 "zone_append": false, 00:11:10.073 "compare": false, 00:11:10.073 "compare_and_write": false, 00:11:10.073 "abort": true, 00:11:10.073 "seek_hole": false, 00:11:10.073 "seek_data": false, 00:11:10.073 "copy": true, 00:11:10.073 "nvme_iov_md": false 00:11:10.073 }, 00:11:10.073 "memory_domains": [ 00:11:10.073 { 00:11:10.073 "dma_device_id": "system", 00:11:10.073 "dma_device_type": 1 00:11:10.073 }, 00:11:10.073 { 00:11:10.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.073 "dma_device_type": 2 00:11:10.073 } 00:11:10.073 ], 00:11:10.073 "driver_specific": {} 00:11:10.073 } 00:11:10.073 ]' 00:11:10.073 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:10.073 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:10.073 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:10.073 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:10.073 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:10.073 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:10.073 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:11:10.073 03:59:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:11.004 03:59:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.004 03:59:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:11.004 03:59:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.004 03:59:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:11.004 03:59:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:12.901 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:12.902 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:11:12.902 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:12.902 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:12.902 03:59:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.275 ************************************ 00:11:14.275 START TEST filesystem_ext4 00:11:14.275 ************************************ 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:14.275 mke2fs 1.47.0 (5-Feb-2023) 00:11:14.275 Discarding device blocks: 0/522240 done 00:11:14.275 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:14.275 Filesystem UUID: 3c58d437-307a-490f-961e-08a213b19bcf 00:11:14.275 Superblock backups stored on 
blocks: 00:11:14.275 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:14.275 00:11:14.275 Allocating group tables: 0/64 done 00:11:14.275 Writing inode tables: 0/64 done 00:11:14.275 Creating journal (8192 blocks): done 00:11:14.275 Writing superblocks and filesystem accounting information: 0/64 done 00:11:14.275 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 688187 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.275 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.276 00:11:14.276 real 0m0.188s 00:11:14.276 user 0m0.027s 00:11:14.276 sys 0m0.062s 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:14.276 ************************************ 00:11:14.276 END TEST filesystem_ext4 00:11:14.276 ************************************ 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:11:14.276 ************************************ 00:11:14.276 START TEST filesystem_btrfs 00:11:14.276 ************************************ 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:14.276 btrfs-progs v6.8.1 00:11:14.276 See https://btrfs.readthedocs.io for more information. 00:11:14.276 00:11:14.276 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:14.276 NOTE: several default settings have changed in version 5.15, please make sure 00:11:14.276 this does not affect your deployments: 00:11:14.276 - DUP for metadata (-m dup) 00:11:14.276 - enabled no-holes (-O no-holes) 00:11:14.276 - enabled free-space-tree (-R free-space-tree) 00:11:14.276 00:11:14.276 Label: (null) 00:11:14.276 UUID: 0f8298ff-3a93-44cb-94ba-70c85318a329 00:11:14.276 Node size: 16384 00:11:14.276 Sector size: 4096 (CPU page size: 4096) 00:11:14.276 Filesystem size: 510.00MiB 00:11:14.276 Block group profiles: 00:11:14.276 Data: single 8.00MiB 00:11:14.276 Metadata: DUP 32.00MiB 00:11:14.276 System: DUP 8.00MiB 00:11:14.276 SSD detected: yes 00:11:14.276 Zoned device: no 00:11:14.276 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:14.276 Checksum: crc32c 00:11:14.276 Number of devices: 1 00:11:14.276 Devices: 00:11:14.276 ID SIZE PATH 00:11:14.276 1 510.00MiB /dev/nvme0n1p1 00:11:14.276 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:14.276 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 688187 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.534 00:11:14.534 real 0m0.224s 00:11:14.534 user 0m0.023s 00:11:14.534 sys 0m0.111s 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:14.534 ************************************ 00:11:14.534 END TEST filesystem_btrfs 
00:11:14.534 ************************************ 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.534 ************************************ 00:11:14.534 START TEST filesystem_xfs 00:11:14.534 ************************************ 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:14.534 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:14.534 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:14.534 = sectsz=512 attr=2, projid32bit=1 00:11:14.534 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:14.534 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:14.535 data = bsize=4096 blocks=130560, imaxpct=25 00:11:14.535 = sunit=0 swidth=0 blks 00:11:14.535 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:14.535 log =internal log bsize=4096 blocks=16384, version=2 00:11:14.535 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:14.535 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:14.793 Discarding blocks...Done. 
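The three mkfs runs above all go through the make_filesystem helper traced at common/autotest_common.sh@930-941: it picks -F for ext4 and -f for btrfs/xfs, then invokes mkfs.$fstype on the exported partition. A minimal standalone sketch of that flow (the retry counter visible in the trace as 'local i=0' is omitted, and the device path is just this run's example):

    # sketch of the make_filesystem flow seen in the trace; not the full helper
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mkfs.ext4 forces with -F
        else
            force=-f        # mkfs.btrfs and mkfs.xfs force with -f
        fi
        mkfs."$fstype" $force "$dev_name"
    }
    make_filesystem xfs /dev/nvme0n1p1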
00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 688187 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.793 03:59:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.793 03:59:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.793 03:59:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.793 00:11:14.793 real 0m0.198s 00:11:14.793 user 0m0.022s 00:11:14.793 sys 0m0.067s 00:11:14.793 03:59:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.793 03:59:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:14.793 ************************************ 00:11:14.793 END TEST filesystem_xfs 00:11:14.793 ************************************ 00:11:14.793 03:59:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:14.793 03:59:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:14.793 03:59:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.727 03:59:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 688187 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 688187 ']' 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 688187 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.727 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 688187 00:11:15.985 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.985 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.985 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 688187' 00:11:15.985 killing process with pid 688187 00:11:15.985 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 688187 00:11:15.985 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 688187 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:16.244 00:11:16.244 real 0m7.035s 00:11:16.244 user 0m27.476s 00:11:16.244 sys 0m0.986s 00:11:16.244 03:59:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.244 ************************************ 00:11:16.244 END TEST nvmf_filesystem_no_in_capsule 00:11:16.244 ************************************ 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.244 ************************************ 00:11:16.244 START TEST nvmf_filesystem_in_capsule 00:11:16.244 ************************************ 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=689710 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 689710 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 689710 ']' 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
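As traced above, nvmfappstart launches build/bin/nvmf_tgt with '-i 0 -e 0xFFFF -m 0xF' and then blocks in waitforlisten until /var/tmp/spdk.sock answers. A rough equivalent of that start-and-wait step (polling via rpc_get_methods and the 0.5 s interval are illustrative assumptions, not lifted from the harness):

    # start the target on a 4-core mask with all tracepoint groups enabled
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the RPC socket accepts requests (polling method is an assumption)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done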
00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.244 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.244 [2024-12-10 03:59:10.597043] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:16.244 [2024-12-10 03:59:10.597087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.503 [2024-12-10 03:59:10.656320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.503 [2024-12-10 03:59:10.692288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.503 [2024-12-10 03:59:10.692341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.503 [2024-12-10 03:59:10.692349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.503 [2024-12-10 03:59:10.692355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.503 [2024-12-10 03:59:10.692359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.503 [2024-12-10 03:59:10.693749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.503 [2024-12-10 03:59:10.693843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.503 [2024-12-10 03:59:10.693933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.503 [2024-12-10 03:59:10.693935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.503 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.503 [2024-12-10 03:59:10.857745] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fdd0c0/0x1fe15b0) 
succeed. 00:11:16.503 [2024-12-10 03:59:10.865921] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fde750/0x2022c50) succeed. 00:11:16.762 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.762 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:16.762 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.762 03:59:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.762 Malloc1 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.762 [2024-12-10 03:59:11.127374] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
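The rpc_cmd calls traced above configure the target end to end for the in-capsule case: an RDMA transport with a 4096-byte in-capsule data limit, a 512 MiB malloc bdev, and a subsystem exposing it on 192.168.100.8:4420. Outside the harness the same sequence would look roughly like this (the rpc.py invocation style is an assumption; the arguments are the ones in the trace):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1      # 512 MiB, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420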
00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.762 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:17.020 { 00:11:17.020 "name": "Malloc1", 00:11:17.020 "aliases": [ 00:11:17.020 "e7bfb107-d854-49d6-8c72-0511f16bfe5b" 00:11:17.020 ], 00:11:17.020 "product_name": "Malloc disk", 00:11:17.020 "block_size": 512, 00:11:17.020 "num_blocks": 1048576, 00:11:17.020 "uuid": "e7bfb107-d854-49d6-8c72-0511f16bfe5b", 00:11:17.020 "assigned_rate_limits": { 00:11:17.020 "rw_ios_per_sec": 0, 00:11:17.020 "rw_mbytes_per_sec": 0, 00:11:17.020 "r_mbytes_per_sec": 0, 00:11:17.020 "w_mbytes_per_sec": 0 00:11:17.020 }, 00:11:17.020 "claimed": true, 00:11:17.020 "claim_type": "exclusive_write", 00:11:17.020 "zoned": false, 00:11:17.020 "supported_io_types": { 00:11:17.020 "read": true, 00:11:17.020 "write": true, 00:11:17.020 "unmap": true, 00:11:17.020 "flush": true, 00:11:17.020 "reset": true, 00:11:17.020 "nvme_admin": false, 00:11:17.020 "nvme_io": false, 00:11:17.020 "nvme_io_md": false, 00:11:17.020 "write_zeroes": true, 00:11:17.020 "zcopy": true, 00:11:17.020 "get_zone_info": false, 00:11:17.020 "zone_management": false, 00:11:17.020 "zone_append": false, 00:11:17.020 "compare": false, 00:11:17.020 "compare_and_write": false, 00:11:17.020 "abort": true, 00:11:17.020 "seek_hole": false, 00:11:17.020 "seek_data": false, 00:11:17.020 "copy": true, 00:11:17.020 "nvme_iov_md": false 00:11:17.020 }, 00:11:17.020 "memory_domains": [ 00:11:17.020 { 00:11:17.020 "dma_device_id": "system", 00:11:17.020 "dma_device_type": 1 00:11:17.020 }, 00:11:17.020 { 00:11:17.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.020 "dma_device_type": 2 00:11:17.020 } 00:11:17.020 ], 00:11:17.020 "driver_specific": {} 00:11:17.020 } 00:11:17.020 ]' 00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
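The jq calls above extract block_size and num_blocks from the bdev_get_bdevs JSON and derive the expected namespace size: 512 * 1048576 = 536870912 bytes (512 MiB), which is then compared against the host-visible nvme0n1 size. As a standalone calculation (same jq filters as the trace; rpc.py path assumed to match the build tree):

    info=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(echo "$info" | jq '.[] .block_size')      # 512
    nb=$(echo "$info" | jq '.[] .num_blocks')      # 1048576
    echo $((bs * nb))                              # 536870912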
00:11:17.020 03:59:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:17.953 03:59:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:17.953 03:59:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:17.953 03:59:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.953 03:59:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:17.953 03:59:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:19.852 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:19.852 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:19.852 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.852 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:19.852 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.852 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:19.852 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:19.852 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:20.109 03:59:14 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:20.109 03:59:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.042 ************************************ 00:11:21.042 START TEST filesystem_in_capsule_ext4 00:11:21.042 ************************************ 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:21.042 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:21.042 mke2fs 1.47.0 (5-Feb-2023) 00:11:21.300 Discarding device blocks: 0/522240 done 00:11:21.300 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:21.300 Filesystem UUID: 9d178833-281c-4859-bf28-bed79ff15fa3 00:11:21.300 
Superblock backups stored on blocks: 00:11:21.300 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:21.300 00:11:21.300 Allocating group tables: 0/64 done 00:11:21.300 Writing inode tables: 0/64 done 00:11:21.300 Creating journal (8192 blocks): done 00:11:21.300 Writing superblocks and filesystem accounting information: 0/64 done 00:11:21.300 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 689710 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.300 00:11:21.300 real 0m0.179s 00:11:21.300 user 0m0.020s 00:11:21.300 sys 0m0.064s 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:21.300 ************************************ 00:11:21.300 END TEST filesystem_in_capsule_ext4 00:11:21.300 ************************************ 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.300 03:59:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.300 ************************************ 00:11:21.300 START TEST filesystem_in_capsule_btrfs 00:11:21.300 ************************************ 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:21.300 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:21.558 btrfs-progs v6.8.1 00:11:21.558 See https://btrfs.readthedocs.io for more information. 00:11:21.558 00:11:21.558 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:21.558 NOTE: several default settings have changed in version 5.15, please make sure 00:11:21.558 this does not affect your deployments: 00:11:21.558 - DUP for metadata (-m dup) 00:11:21.558 - enabled no-holes (-O no-holes) 00:11:21.558 - enabled free-space-tree (-R free-space-tree) 00:11:21.558 00:11:21.558 Label: (null) 00:11:21.558 UUID: 393a1c6c-a340-4e6b-84d4-c97edda112db 00:11:21.558 Node size: 16384 00:11:21.558 Sector size: 4096 (CPU page size: 4096) 00:11:21.558 Filesystem size: 510.00MiB 00:11:21.558 Block group profiles: 00:11:21.558 Data: single 8.00MiB 00:11:21.558 Metadata: DUP 32.00MiB 00:11:21.558 System: DUP 8.00MiB 00:11:21.558 SSD detected: yes 00:11:21.558 Zoned device: no 00:11:21.558 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:21.558 Checksum: crc32c 00:11:21.558 Number of devices: 1 00:11:21.558 Devices: 00:11:21.558 ID SIZE PATH 00:11:21.558 1 510.00MiB /dev/nvme0n1p1 00:11:21.558 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 689710 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.558 00:11:21.558 real 0m0.226s 00:11:21.558 user 0m0.023s 00:11:21.558 sys 0m0.111s 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.558 ************************************ 00:11:21.558 END TEST filesystem_in_capsule_btrfs 00:11:21.558 ************************************ 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.558 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.816 ************************************ 00:11:21.816 START TEST filesystem_in_capsule_xfs 00:11:21.816 ************************************ 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:21.816 03:59:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:21.816 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:21.816 = sectsz=512 attr=2, projid32bit=1 00:11:21.816 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:21.816 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:21.816 data = bsize=4096 blocks=130560, imaxpct=25 00:11:21.816 = sunit=0 swidth=0 blks 00:11:21.816 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:21.816 log =internal log bsize=4096 blocks=16384, version=2 00:11:21.816 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:21.816 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:21.816 Discarding blocks...Done. 
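After each mkfs, the subtest runs the same mount smoke test traced at target/filesystem.sh@23-30: mount the new filesystem, create and remove a file with syncs around it, then unmount, exercising writes over the NVMe-oF RDMA path. Stripped of the harness bookkeeping it is just:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa      # first write through the connected namespace
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device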
00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 689710 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.816 00:11:21.816 real 0m0.180s 00:11:21.816 user 0m0.019s 00:11:21.816 sys 0m0.066s 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:21.816 ************************************ 00:11:21.816 END TEST filesystem_in_capsule_xfs 00:11:21.816 ************************************ 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:21.816 03:59:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.749 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:22.749 03:59:17 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:22.749 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:22.749 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 689710 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 689710 ']' 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 689710 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689710 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689710' 00:11:23.008 killing process with pid 689710 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 689710 00:11:23.008 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 689710 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:23.267 00:11:23.267 real 0m7.055s 00:11:23.267 
user 0m27.470s 00:11:23.267 sys 0m1.051s 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.267 ************************************ 00:11:23.267 END TEST nvmf_filesystem_in_capsule 00:11:23.267 ************************************ 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.267 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:23.267 rmmod nvme_rdma 00:11:23.526 rmmod nvme_fabrics 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:23.526 00:11:23.526 real 0m20.589s 00:11:23.526 user 0m56.994s 00:11:23.526 sys 0m6.658s 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.526 ************************************ 00:11:23.526 END TEST nvmf_filesystem 00:11:23.526 ************************************ 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:23.526 ************************************ 00:11:23.526 START TEST nvmf_target_discovery 00:11:23.526 ************************************ 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:23.526 * Looking for test storage... 
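The filesystem-test teardown traced above can be reproduced outside the harness. A minimal sketch, assuming the target PID is held in $nvmfpid and the run happens from an spdk checkout with scripts/rpc.py available (the harness wraps these steps in rpc_cmd, killprocess and nvmftestfini):

  # Detach the initiator from the exported subsystem, as 'nvme disconnect' does above.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # Remove the subsystem from the running target, then stop the target process.
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"
  # Unload the kernel initiator modules, mirroring nvmftestfini for RDMA runs.
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics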
00:11:23.526 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:23.526 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.784 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:23.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.785 --rc genhtml_branch_coverage=1 00:11:23.785 --rc genhtml_function_coverage=1 00:11:23.785 --rc genhtml_legend=1 00:11:23.785 --rc geninfo_all_blocks=1 00:11:23.785 --rc geninfo_unexecuted_blocks=1 00:11:23.785 00:11:23.785 ' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:23.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.785 --rc genhtml_branch_coverage=1 00:11:23.785 --rc genhtml_function_coverage=1 00:11:23.785 --rc genhtml_legend=1 00:11:23.785 --rc geninfo_all_blocks=1 00:11:23.785 --rc geninfo_unexecuted_blocks=1 00:11:23.785 00:11:23.785 ' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:23.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.785 --rc genhtml_branch_coverage=1 00:11:23.785 --rc genhtml_function_coverage=1 00:11:23.785 --rc genhtml_legend=1 00:11:23.785 --rc geninfo_all_blocks=1 00:11:23.785 --rc geninfo_unexecuted_blocks=1 00:11:23.785 00:11:23.785 ' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:23.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.785 --rc genhtml_branch_coverage=1 00:11:23.785 --rc genhtml_function_coverage=1 00:11:23.785 --rc genhtml_legend=1 00:11:23.785 --rc geninfo_all_blocks=1 00:11:23.785 --rc geninfo_unexecuted_blocks=1 00:11:23.785 00:11:23.785 ' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.785 03:59:17 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.785 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.785 03:59:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.352 03:59:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
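The arrays set up above key the supported NICs by PCI vendor:device ID, and the 'Found ...' lines that follow come from walking that list. A rough standalone equivalent of the Mellanox scan, assuming lspci from pciutils is available (the harness consults its own pci_bus_cache instead):

  # List Mellanox (vendor 0x15b3) devices with PCI domain and numeric IDs,
  # which is the information behind 'Found 0000:18:00.0 (0x15b3 - 0x1015)'.
  lspci -D -nn -d 15b3: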
00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:30.352 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:30.352 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:30.352 Found net devices under 0000:18:00.0: mlx_0_0 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.352 03:59:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:30.352 Found net devices under 0000:18:00.1: mlx_0_1 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:11:30.352 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
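The rdma_device_init step above reduces to loading the kernel IB/RDMA stack before any interface is touched. A sketch of the same sequence, with the module list copied from the trace:

  # Load the RDMA/IB modules the rdma transport tests depend on.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done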
00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:30.353 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:30.353 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:30.353 altname enp24s0f0np0 00:11:30.353 altname ens785f0np0 00:11:30.353 inet 192.168.100.8/24 scope global mlx_0_0 00:11:30.353 valid_lft forever preferred_lft forever 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:30.353 03:59:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:30.353 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:30.353 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:30.353 altname enp24s0f1np1 00:11:30.353 altname ens785f1np1 00:11:30.353 inet 192.168.100.9/24 scope global mlx_0_1 00:11:30.353 valid_lft forever preferred_lft forever 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:30.353 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
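The per-interface address probes running through this stretch all reduce to one pipeline. A sketch of the get_ip_address helper exactly as the trace shows it being expanded:

  # Print the first IPv4 address of an interface, stripped of its prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # 192.168.100.9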
00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:30.354 192.168.100.9' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:30.354 192.168.100.9' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:30.354 192.168.100.9' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.354 03:59:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=694474 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 694474 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 694474 ']' 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 [2024-12-10 03:59:23.708685] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:30.354 [2024-12-10 03:59:23.708735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.354 [2024-12-10 03:59:23.768345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.354 [2024-12-10 03:59:23.808068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.354 [2024-12-10 03:59:23.808107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.354 [2024-12-10 03:59:23.808113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.354 [2024-12-10 03:59:23.808118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.354 [2024-12-10 03:59:23.808124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
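The nvmfappstart step above comes down to launching the target and waiting for its RPC socket. A sketch run from an spdk checkout, assuming that polling rpc.py spdk_get_version is an acceptable stand-in for the harness's waitforlisten:

  # Start the target with all trace groups enabled on cores 0-3, as in the log.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Block until the UNIX-domain RPC socket answers.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done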
00:11:30.354 [2024-12-10 03:59:23.809361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.354 [2024-12-10 03:59:23.809455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.354 [2024-12-10 03:59:23.809512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.354 [2024-12-10 03:59:23.809513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.354 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:30.355 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.355 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.355 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.355 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:30.355 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 [2024-12-10 03:59:23.977103] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18720c0/0x18765b0) succeed. 00:11:30.355 [2024-12-10 03:59:23.985414] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1873750/0x18b7c50) succeed. 
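Each pass of the seq 1 4 loop traced below issues the same four RPCs with a new index. A condensed sketch of the whole setup against the target started above, with rpc.py standing in for the harness's rpc_cmd wrapper:

  # One RDMA transport, then per index: a null bdev, a subsystem, a
  # namespace and an RDMA listener, as in target/discovery.sh.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  for i in 1 2 3 4; do
      ./scripts/rpc.py bdev_null_create "Null$i" 102400 512
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
  done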
00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 Null1 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 [2024-12-10 03:59:24.154968] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 Null2 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:30.355 03:59:24 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.355 Null3 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:30.355 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 03:59:24 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 Null4 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.356 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:11:30.356 00:11:30.356 Discovery Log Number of Records 6, Generation counter 6 00:11:30.356 =====Discovery Log Entry 0====== 00:11:30.356 trtype: rdma 00:11:30.356 adrfam: ipv4 00:11:30.356 subtype: current discovery subsystem 00:11:30.356 treq: not required 00:11:30.356 portid: 0 00:11:30.356 trsvcid: 4420 00:11:30.356 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:30.356 traddr: 192.168.100.8 00:11:30.356 eflags: explicit discovery connections, duplicate discovery information 00:11:30.356 rdma_prtype: not specified 00:11:30.356 rdma_qptype: connected 00:11:30.356 rdma_cms: rdma-cm 00:11:30.356 rdma_pkey: 0x0000 00:11:30.356 =====Discovery Log Entry 1====== 00:11:30.356 trtype: rdma 00:11:30.356 adrfam: ipv4 00:11:30.356 subtype: nvme subsystem 00:11:30.356 treq: not required 00:11:30.356 portid: 0 00:11:30.356 trsvcid: 4420 00:11:30.356 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:30.356 traddr: 192.168.100.8 00:11:30.356 eflags: none 00:11:30.356 rdma_prtype: not specified 00:11:30.356 rdma_qptype: connected 00:11:30.356 rdma_cms: rdma-cm 00:11:30.356 rdma_pkey: 0x0000 00:11:30.356 =====Discovery Log Entry 2====== 00:11:30.356 trtype: rdma 00:11:30.356 adrfam: ipv4 00:11:30.356 subtype: nvme subsystem 00:11:30.356 treq: not required 00:11:30.356 portid: 0 00:11:30.356 trsvcid: 4420 00:11:30.356 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:30.356 traddr: 192.168.100.8 00:11:30.356 eflags: none 00:11:30.356 rdma_prtype: not specified 00:11:30.356 rdma_qptype: connected 00:11:30.356 rdma_cms: rdma-cm 00:11:30.356 rdma_pkey: 0x0000 00:11:30.357 =====Discovery Log Entry 3====== 00:11:30.357 trtype: rdma 00:11:30.357 adrfam: ipv4 00:11:30.357 subtype: nvme subsystem 00:11:30.357 treq: not required 00:11:30.357 portid: 0 00:11:30.357 trsvcid: 4420 00:11:30.357 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:30.357 traddr: 192.168.100.8 00:11:30.357 eflags: none 00:11:30.357 rdma_prtype: not specified 00:11:30.357 rdma_qptype: connected 00:11:30.357 rdma_cms: rdma-cm 00:11:30.357 rdma_pkey: 0x0000 00:11:30.357 =====Discovery Log Entry 4====== 00:11:30.357 trtype: rdma 00:11:30.357 adrfam: ipv4 00:11:30.357 subtype: nvme subsystem 00:11:30.357 treq: not required 00:11:30.357 portid: 0 00:11:30.357 trsvcid: 4420 00:11:30.357 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:30.357 traddr: 192.168.100.8 00:11:30.357 eflags: none 00:11:30.357 rdma_prtype: not specified 00:11:30.357 rdma_qptype: connected 00:11:30.357 rdma_cms: rdma-cm 00:11:30.357 rdma_pkey: 0x0000 00:11:30.357 =====Discovery Log Entry 5====== 00:11:30.357 trtype: rdma 00:11:30.357 adrfam: ipv4 00:11:30.357 subtype: discovery subsystem referral 00:11:30.357 treq: not required 00:11:30.357 portid: 0 00:11:30.357 trsvcid: 4430 00:11:30.357 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:30.357 traddr: 192.168.100.8 00:11:30.357 eflags: none 00:11:30.357 rdma_prtype: unrecognized 00:11:30.357 rdma_qptype: unrecognized 00:11:30.357 rdma_cms: unrecognized 00:11:30.357 rdma_pkey: 0x0000 00:11:30.357 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:30.357 Perform nvmf subsystem discovery via RPC 00:11:30.357 03:59:24 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:30.357 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.357 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.357 [ 00:11:30.357 { 00:11:30.357 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:30.357 "subtype": "Discovery", 00:11:30.357 "listen_addresses": [ 00:11:30.357 { 00:11:30.357 "trtype": "RDMA", 00:11:30.357 "adrfam": "IPv4", 00:11:30.357 "traddr": "192.168.100.8", 00:11:30.357 "trsvcid": "4420" 00:11:30.357 } 00:11:30.357 ], 00:11:30.357 "allow_any_host": true, 00:11:30.357 "hosts": [] 00:11:30.357 }, 00:11:30.357 { 00:11:30.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:30.357 "subtype": "NVMe", 00:11:30.357 "listen_addresses": [ 00:11:30.357 { 00:11:30.357 "trtype": "RDMA", 00:11:30.357 "adrfam": "IPv4", 00:11:30.357 "traddr": "192.168.100.8", 00:11:30.357 "trsvcid": "4420" 00:11:30.357 } 00:11:30.357 ], 00:11:30.357 "allow_any_host": true, 00:11:30.357 "hosts": [], 00:11:30.357 "serial_number": "SPDK00000000000001", 00:11:30.357 "model_number": "SPDK bdev Controller", 00:11:30.357 "max_namespaces": 32, 00:11:30.357 "min_cntlid": 1, 00:11:30.357 "max_cntlid": 65519, 00:11:30.357 "namespaces": [ 00:11:30.357 { 00:11:30.357 "nsid": 1, 00:11:30.357 "bdev_name": "Null1", 00:11:30.357 "name": "Null1", 00:11:30.357 "nguid": "6FE266DD9D394C6484519A6AA52F8995", 00:11:30.357 "uuid": "6fe266dd-9d39-4c64-8451-9a6aa52f8995" 00:11:30.357 } 00:11:30.357 ] 00:11:30.357 }, 00:11:30.357 { 00:11:30.357 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:30.357 "subtype": "NVMe", 00:11:30.357 "listen_addresses": [ 00:11:30.357 { 00:11:30.357 "trtype": "RDMA", 00:11:30.357 "adrfam": "IPv4", 00:11:30.357 "traddr": "192.168.100.8", 00:11:30.357 "trsvcid": "4420" 00:11:30.357 } 00:11:30.357 ], 00:11:30.357 "allow_any_host": true, 00:11:30.357 "hosts": [], 00:11:30.357 "serial_number": "SPDK00000000000002", 00:11:30.357 "model_number": "SPDK bdev Controller", 00:11:30.357 "max_namespaces": 32, 00:11:30.357 "min_cntlid": 1, 00:11:30.357 "max_cntlid": 65519, 00:11:30.357 "namespaces": [ 00:11:30.357 { 00:11:30.357 "nsid": 1, 00:11:30.357 "bdev_name": "Null2", 00:11:30.357 "name": "Null2", 00:11:30.357 "nguid": "03CF13617CC74993B7604F0167DCF227", 00:11:30.357 "uuid": "03cf1361-7cc7-4993-b760-4f0167dcf227" 00:11:30.357 } 00:11:30.357 ] 00:11:30.357 }, 00:11:30.357 { 00:11:30.357 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:30.357 "subtype": "NVMe", 00:11:30.357 "listen_addresses": [ 00:11:30.357 { 00:11:30.357 "trtype": "RDMA", 00:11:30.357 "adrfam": "IPv4", 00:11:30.357 "traddr": "192.168.100.8", 00:11:30.357 "trsvcid": "4420" 00:11:30.357 } 00:11:30.357 ], 00:11:30.357 "allow_any_host": true, 00:11:30.357 "hosts": [], 00:11:30.357 "serial_number": "SPDK00000000000003", 00:11:30.357 "model_number": "SPDK bdev Controller", 00:11:30.357 "max_namespaces": 32, 00:11:30.357 "min_cntlid": 1, 00:11:30.357 "max_cntlid": 65519, 00:11:30.357 "namespaces": [ 00:11:30.357 { 00:11:30.357 "nsid": 1, 00:11:30.357 "bdev_name": "Null3", 00:11:30.357 "name": "Null3", 00:11:30.358 "nguid": "9B843DD516BC45ADA4D460129E30C06F", 00:11:30.358 "uuid": "9b843dd5-16bc-45ad-a4d4-60129e30c06f" 00:11:30.358 } 00:11:30.358 ] 00:11:30.358 }, 00:11:30.358 { 00:11:30.358 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:30.358 "subtype": "NVMe", 00:11:30.358 "listen_addresses": [ 00:11:30.358 { 00:11:30.358 
"trtype": "RDMA", 00:11:30.358 "adrfam": "IPv4", 00:11:30.358 "traddr": "192.168.100.8", 00:11:30.358 "trsvcid": "4420" 00:11:30.358 } 00:11:30.358 ], 00:11:30.358 "allow_any_host": true, 00:11:30.358 "hosts": [], 00:11:30.358 "serial_number": "SPDK00000000000004", 00:11:30.358 "model_number": "SPDK bdev Controller", 00:11:30.358 "max_namespaces": 32, 00:11:30.358 "min_cntlid": 1, 00:11:30.358 "max_cntlid": 65519, 00:11:30.358 "namespaces": [ 00:11:30.358 { 00:11:30.358 "nsid": 1, 00:11:30.358 "bdev_name": "Null4", 00:11:30.358 "name": "Null4", 00:11:30.358 "nguid": "01B890BCCAC24F0C8F5A424AB54F1DA3", 00:11:30.358 "uuid": "01b890bc-cac2-4f0c-8f5a-424ab54f1da3" 00:11:30.358 } 00:11:30.358 ] 00:11:30.358 } 00:11:30.358 ] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:30.358 
03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:30.358 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:30.359 03:59:24 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:30.359 rmmod nvme_rdma 00:11:30.359 rmmod nvme_fabrics 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 694474 ']' 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 694474 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 694474 ']' 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 694474 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 694474 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 694474' 00:11:30.359 killing process with pid 694474 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 694474 00:11:30.359 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 694474 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:30.619 00:11:30.619 real 0m7.081s 00:11:30.619 user 0m5.801s 00:11:30.619 sys 0m4.660s 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:30.619 ************************************ 00:11:30.619 END TEST nvmf_target_discovery 
00:11:30.619 ************************************ 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:30.619 ************************************ 00:11:30.619 START TEST nvmf_referrals 00:11:30.619 ************************************ 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:30.619 * Looking for test storage... 00:11:30.619 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:30.619 03:59:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:30.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.878 --rc genhtml_branch_coverage=1 00:11:30.878 --rc genhtml_function_coverage=1 00:11:30.878 --rc genhtml_legend=1 00:11:30.878 --rc geninfo_all_blocks=1 00:11:30.878 --rc geninfo_unexecuted_blocks=1 00:11:30.878 00:11:30.878 ' 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:30.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.878 --rc genhtml_branch_coverage=1 00:11:30.878 --rc genhtml_function_coverage=1 00:11:30.878 --rc genhtml_legend=1 00:11:30.878 --rc geninfo_all_blocks=1 00:11:30.878 --rc geninfo_unexecuted_blocks=1 00:11:30.878 00:11:30.878 ' 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:30.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.878 --rc genhtml_branch_coverage=1 00:11:30.878 --rc genhtml_function_coverage=1 00:11:30.878 --rc genhtml_legend=1 00:11:30.878 --rc geninfo_all_blocks=1 00:11:30.878 --rc geninfo_unexecuted_blocks=1 00:11:30.878 00:11:30.878 ' 00:11:30.878 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:30.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.879 --rc genhtml_branch_coverage=1 00:11:30.879 --rc genhtml_function_coverage=1 00:11:30.879 --rc genhtml_legend=1 00:11:30.879 --rc geninfo_all_blocks=1 00:11:30.879 --rc geninfo_unexecuted_blocks=1 00:11:30.879 00:11:30.879 ' 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.879 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:30.879 03:59:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:36.147 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:36.147 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:36.147 Found net devices under 0000:18:00.0: mlx_0_0 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:36.147 Found net devices under 0000:18:00.1: mlx_0_1 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:36.147 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:36.148 03:59:30 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:36.148 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:36.148 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:36.148 altname enp24s0f0np0 00:11:36.148 altname ens785f0np0 00:11:36.148 inet 192.168.100.8/24 scope global mlx_0_0 00:11:36.148 valid_lft forever preferred_lft forever 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:36.148 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:36.148 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:36.148 altname enp24s0f1np1 00:11:36.148 altname ens785f1np1 00:11:36.148 inet 192.168.100.9/24 scope global mlx_0_1 00:11:36.148 valid_lft forever preferred_lft forever 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:36.148 03:59:30 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:36.148 192.168.100.9' 
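The trace above is nvmf/common.sh walking get_rdma_if_list over the two mlx5 ports and reading each one's IPv4 address. A minimal standalone sketch of that lookup, assuming the interface names found on this rig (mlx_0_0 and mlx_0_1):

    # Sketch of the per-interface lookup traced above (get_ip_address):
    # column 4 of `ip -o -4 addr show` is the CIDR address; cut drops the prefix.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"    # 192.168.100.8 and 192.168.100.9 on this run

The first and second target IPs are then peeled off that list with head and tail, as the next lines show.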
00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:36.148 192.168.100.9' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:36.148 192.168.100.9' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=697919 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 697919 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.148 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 697919 ']' 00:11:36.149 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.149 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.149 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.149 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.149 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.149 [2024-12-10 03:59:30.350720] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
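nvmfappstart above amounts to launching the target binary with the logged flags and waiting until it answers on its RPC socket; a sketch under the assumption that the workspace layout matches this run:

    # Equivalent of `nvmfappstart -m 0xF` as traced above: -e 0xFFFF turns on
    # all tracepoint groups, -m 0xF gives the app a four-core mask.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten then polls /var/tmp/spdk.sock until RPCs succeed, which is
    # the "Waiting for process to start up..." message in the trace above.

The EAL and reactor notices that follow are the target coming up on those four cores.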
00:11:36.149 [2024-12-10 03:59:30.350762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.149 [2024-12-10 03:59:30.409705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.149 [2024-12-10 03:59:30.449281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.149 [2024-12-10 03:59:30.449319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.149 [2024-12-10 03:59:30.449325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.149 [2024-12-10 03:59:30.449331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.149 [2024-12-10 03:59:30.449336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.149 [2024-12-10 03:59:30.450719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.149 [2024-12-10 03:59:30.450780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.149 [2024-12-10 03:59:30.450864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.149 [2024-12-10 03:59:30.450866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.408 [2024-12-10 03:59:30.605463] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18d30c0/0x18d75b0) succeed. 00:11:36.408 [2024-12-10 03:59:30.613645] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18d4750/0x1918c50) succeed. 
00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.408 [2024-12-10 03:59:30.739520] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.408 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.666 03:59:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:36.666 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.666 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:36.666 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:36.666 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.666 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.666 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:36.666 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.666 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:36.925 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.183 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:37.441 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:37.700 03:59:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
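The passes above keep alternating between two views of the referral table: the RPC side (nvmf_discovery_get_referrals piped through jq '.[].address.traddr') and the host side (nvme discover against the discovery service, with the current discovery subsystem filtered out). The helper being traced can be reconstructed from the xtrace like so, assuming NVME_HOSTNQN and NVME_HOSTID carry the host NQN and ID shown in the nvme discover calls; the trailing unquoted echo flattens the sorted list onto one line so it can be string-compared against the expected addresses:

    get_referral_ips() {    # reconstruction of target/referrals.sh:get_referral_ips
        local ips
        if [[ $1 == rpc ]]; then
            ips=$(rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
        elif [[ $1 == nvme ]]; then
            ips=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
                    -t rdma -a 192.168.100.8 -s 8009 -o json |
                  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
                  sort)
        fi
        echo $ips           # unquoted on purpose: collapse newlines to spaces
    }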
00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:37.700 rmmod nvme_rdma 00:11:37.700 rmmod nvme_fabrics 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 697919 ']' 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 697919 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 697919 ']' 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 697919 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.700 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 697919 00:11:37.958 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.958 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.958 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 697919' 00:11:37.958 killing process with pid 697919 00:11:37.958 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 697919 00:11:37.958 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 697919 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:38.217 00:11:38.217 real 0m7.452s 00:11:38.217 user 0m9.649s 00:11:38.217 sys 0m4.678s 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.217 03:59:32 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.217 ************************************ 00:11:38.217 END TEST nvmf_referrals 00:11:38.217 ************************************ 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.217 ************************************ 00:11:38.217 START TEST nvmf_connect_disconnect 00:11:38.217 ************************************ 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:38.217 * Looking for test storage... 00:11:38.217 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.217 --rc genhtml_branch_coverage=1 00:11:38.217 --rc genhtml_function_coverage=1 00:11:38.217 --rc genhtml_legend=1 00:11:38.217 --rc geninfo_all_blocks=1 00:11:38.217 --rc geninfo_unexecuted_blocks=1 00:11:38.217 00:11:38.217 ' 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.217 --rc genhtml_branch_coverage=1 00:11:38.217 --rc genhtml_function_coverage=1 00:11:38.217 --rc genhtml_legend=1 00:11:38.217 --rc geninfo_all_blocks=1 00:11:38.217 --rc geninfo_unexecuted_blocks=1 00:11:38.217 00:11:38.217 ' 00:11:38.217 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.218 --rc genhtml_branch_coverage=1 00:11:38.218 --rc genhtml_function_coverage=1 00:11:38.218 --rc genhtml_legend=1 00:11:38.218 --rc geninfo_all_blocks=1 00:11:38.218 --rc geninfo_unexecuted_blocks=1 00:11:38.218 00:11:38.218 ' 00:11:38.218 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.218 --rc genhtml_branch_coverage=1 00:11:38.218 --rc genhtml_function_coverage=1 00:11:38.218 --rc genhtml_legend=1 00:11:38.218 --rc geninfo_all_blocks=1 00:11:38.218 --rc geninfo_unexecuted_blocks=1 00:11:38.218 00:11:38.218 ' 00:11:38.218 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.218 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:38.476 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.476 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.476 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.476 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.476 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.476 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.477 03:59:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.477 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.477 03:59:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 
00:11:43.746 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:43.746 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:43.746 Found net devices under 0000:18:00.0: mlx_0_0 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
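The stray "/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message a little earlier is the script tripping over an empty variable in a numeric test ('[' '' -eq 1 ']') and appears to be harmless here. What follows is common.sh enumerating NICs: it keeps a table of Intel (e810/x722) and Mellanox PCI device IDs, and each "Found 0000:18:00.x (0x15b3 - 0x1015)" line is one port of what looks like a ConnectX-4 Lx adapter being matched and then resolved to its kernel netdev through sysfs. A sketch of that sysfs mapping, mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion traced above:

    # walk PCI functions, keep Mellanox ones, and print their net devices --
    # the same /sys/bus/pci/devices/$pci/net/* expansion common.sh performs
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x15b3 ]] || continue   # Mellanox vendor ID
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done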
00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:43.746 Found net devices under 0000:18:00.1: mlx_0_1 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:43.746 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:43.747 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:43.747 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:43.747 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.747 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:43.747 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:43.747 03:59:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:43.747 03:59:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:43.747 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.747 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:43.747 altname enp24s0f0np0 00:11:43.747 altname ens785f0np0 00:11:43.747 inet 192.168.100.8/24 scope global mlx_0_0 00:11:43.747 valid_lft forever preferred_lft forever 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:43.747 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.747 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:43.747 altname enp24s0f1np1 00:11:43.747 altname ens785f1np1 00:11:43.747 inet 192.168.100.9/24 scope global mlx_0_1 00:11:43.747 valid_lft forever preferred_lft forever 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.747 03:59:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:43.747 192.168.100.9' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:43.747 192.168.100.9' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:43.747 192.168.100.9' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:43.747 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:44.006 03:59:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=701625 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 701625 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 701625 ']' 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.006 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.006 [2024-12-10 03:59:38.194747] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:11:44.006 [2024-12-10 03:59:38.194793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.006 [2024-12-10 03:59:38.255501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.006 [2024-12-10 03:59:38.295178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.006 [2024-12-10 03:59:38.295214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.006 [2024-12-10 03:59:38.295221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.006 [2024-12-10 03:59:38.295232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.006 [2024-12-10 03:59:38.295236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
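The trace above derives the two RDMA target addresses by parsing "ip -o -4 addr show" per mlx interface: awk takes the fourth field (the CIDR address), cut drops the prefix length, and head/tail split the resulting list into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that pipeline, reconstructed from the trace rather than quoted from common.sh (interface names are this rig's mlx_0_0/mlx_0_1):

    get_ip_address() {
        # IPv4 address of interface "$1" without the /prefix, e.g. 192.168.100.8
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9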
00:11:44.006 [2024-12-10 03:59:38.296449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.006 [2024-12-10 03:59:38.296547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.006 [2024-12-10 03:59:38.296600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.006 [2024-12-10 03:59:38.296601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 [2024-12-10 03:59:38.445187] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:44.265 [2024-12-10 03:59:38.463981] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c530c0/0x1c575b0) succeed. 00:11:44.265 [2024-12-10 03:59:38.472195] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c54750/0x1c98c50) succeed. 
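With the target up (reactors running on cores 0-3 and both mlx5 IB devices created), connect_disconnect.sh configures it over the /var/tmp/spdk.sock RPC socket and then runs num_iterations=5 connect/disconnect passes against cnode1 — the "disconnected 1 controller(s)" lines that follow. Condensed from the rpc_cmd trace below, and assuming rpc_cmd is the usual thin wrapper over scripts/rpc.py, the setup amounts to:

    # paths relative to the SPDK checkout; rpc.py talks to /var/tmp/spdk.sock by default
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                          # creates Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420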
00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 [2024-12-10 03:59:38.614857] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:44.265 03:59:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:48.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:04.373 03:59:58 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:04.373 rmmod nvme_rdma 00:12:04.373 rmmod nvme_fabrics 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 701625 ']' 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 701625 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 701625 ']' 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 701625 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 701625 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 701625' 00:12:04.373 killing process with pid 701625 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 701625 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 701625 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:04.373 00:12:04.373 real 0m26.321s 00:12:04.373 user 1m22.790s 00:12:04.373 sys 0m5.089s 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.373 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:04.373 
************************************ 00:12:04.373 END TEST nvmf_connect_disconnect 00:12:04.373 ************************************ 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.632 ************************************ 00:12:04.632 START TEST nvmf_multitarget 00:12:04.632 ************************************ 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:04.632 * Looking for test storage... 00:12:04.632 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:04.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.632 --rc genhtml_branch_coverage=1 00:12:04.632 --rc genhtml_function_coverage=1 00:12:04.632 --rc genhtml_legend=1 00:12:04.632 --rc geninfo_all_blocks=1 00:12:04.632 --rc geninfo_unexecuted_blocks=1 00:12:04.632 00:12:04.632 ' 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:04.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.632 --rc genhtml_branch_coverage=1 00:12:04.632 --rc genhtml_function_coverage=1 00:12:04.632 --rc genhtml_legend=1 00:12:04.632 --rc geninfo_all_blocks=1 00:12:04.632 --rc geninfo_unexecuted_blocks=1 00:12:04.632 00:12:04.632 ' 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:04.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.632 --rc genhtml_branch_coverage=1 00:12:04.632 --rc genhtml_function_coverage=1 00:12:04.632 --rc genhtml_legend=1 00:12:04.632 --rc geninfo_all_blocks=1 00:12:04.632 --rc geninfo_unexecuted_blocks=1 00:12:04.632 00:12:04.632 ' 00:12:04.632 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:04.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.632 --rc genhtml_branch_coverage=1 00:12:04.632 --rc genhtml_function_coverage=1 00:12:04.632 --rc genhtml_legend=1 00:12:04.633 --rc geninfo_all_blocks=1 00:12:04.633 --rc geninfo_unexecuted_blocks=1 00:12:04.633 00:12:04.633 ' 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.633 03:59:58 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.633 03:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:04.633 03:59:59 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:04.633 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:04.891 03:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:11.455 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:11.455 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:11.455 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:11.456 Found net devices under 0000:18:00.0: mlx_0_0 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:11.456 Found net devices under 0000:18:00.1: mlx_0_1 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:11.456 04:00:04 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:11.456 04:00:04 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:11.456 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:11.456 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:11.456 altname enp24s0f0np0 00:12:11.456 altname ens785f0np0 00:12:11.456 inet 192.168.100.8/24 scope global mlx_0_0 00:12:11.456 valid_lft forever preferred_lft forever 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:11.456 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:11.456 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:12:11.456 altname enp24s0f1np1 00:12:11.456 altname ens785f1np1 00:12:11.456 inet 192.168.100.9/24 scope global mlx_0_1 00:12:11.456 valid_lft forever preferred_lft forever 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:11.456 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:11.457 192.168.100.9' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:11.457 192.168.100.9' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # 
head -n 1 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:11.457 192.168.100.9' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=708800 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 708800 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 708800 ']' 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.457 04:00:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.457 [2024-12-10 04:00:04.877548] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:12:11.457 [2024-12-10 04:00:04.877595] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.457 [2024-12-10 04:00:04.938680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.457 [2024-12-10 04:00:04.981336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.457 [2024-12-10 04:00:04.981374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.457 [2024-12-10 04:00:04.981382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.457 [2024-12-10 04:00:04.981388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.457 [2024-12-10 04:00:04.981393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.457 [2024-12-10 04:00:04.982603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.457 [2024-12-10 04:00:04.982698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.457 [2024-12-10 04:00:04.982762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.457 [2024-12-10 04:00:04.982764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:11.457 "nvmf_tgt_1" 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:11.457 "nvmf_tgt_2" 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.457 
04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:11.457 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:11.458 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:11.458 true 00:12:11.458 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:11.458 true 00:12:11.458 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.458 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:11.458 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:11.458 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:11.458 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:11.716 rmmod nvme_rdma 00:12:11.716 rmmod nvme_fabrics 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 708800 ']' 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 708800 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 708800 ']' 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 708800 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708800 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708800' 00:12:11.716 killing process with pid 708800 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 708800 00:12:11.716 04:00:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 708800 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:11.975 00:12:11.975 real 0m7.289s 00:12:11.975 user 0m6.905s 00:12:11.975 sys 0m4.837s 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:11.975 ************************************ 00:12:11.975 END TEST nvmf_multitarget 00:12:11.975 ************************************ 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.975 ************************************ 00:12:11.975 START TEST nvmf_rpc 00:12:11.975 ************************************ 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:11.975 * Looking for test storage... 
00:12:11.975 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:11.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.975 --rc genhtml_branch_coverage=1 00:12:11.975 --rc genhtml_function_coverage=1 00:12:11.975 --rc genhtml_legend=1 00:12:11.975 --rc geninfo_all_blocks=1 00:12:11.975 --rc geninfo_unexecuted_blocks=1 00:12:11.975 00:12:11.975 ' 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:11.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.975 --rc genhtml_branch_coverage=1 00:12:11.975 --rc genhtml_function_coverage=1 00:12:11.975 --rc genhtml_legend=1 00:12:11.975 --rc geninfo_all_blocks=1 00:12:11.975 --rc geninfo_unexecuted_blocks=1 00:12:11.975 00:12:11.975 ' 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:11.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.975 --rc genhtml_branch_coverage=1 00:12:11.975 --rc genhtml_function_coverage=1 00:12:11.975 --rc genhtml_legend=1 00:12:11.975 --rc geninfo_all_blocks=1 00:12:11.975 --rc geninfo_unexecuted_blocks=1 00:12:11.975 00:12:11.975 ' 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:11.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.975 --rc genhtml_branch_coverage=1 00:12:11.975 --rc genhtml_function_coverage=1 00:12:11.975 --rc genhtml_legend=1 00:12:11.975 --rc geninfo_all_blocks=1 00:12:11.975 --rc geninfo_unexecuted_blocks=1 00:12:11.975 00:12:11.975 ' 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.975 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.976 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.976 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:12.234 04:00:06 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.234 04:00:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.799 04:00:12 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:18.799 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:18.799 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:18.799 Found net devices under 0000:18:00.0: mlx_0_0 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:18.799 Found net devices under 0000:18:00.1: mlx_0_1 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:18.799 04:00:12 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:18.799 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:18.800 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.800 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:18.800 altname enp24s0f0np0 00:12:18.800 altname ens785f0np0 00:12:18.800 inet 192.168.100.8/24 scope global mlx_0_0 00:12:18.800 valid_lft forever preferred_lft forever 00:12:18.800 
04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:18.800 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.800 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:12:18.800 altname enp24s0f1np1 00:12:18.800 altname ens785f1np1 00:12:18.800 inet 192.168.100.9/24 scope global mlx_0_1 00:12:18.800 valid_lft forever preferred_lft forever 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:18.800 192.168.100.9' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:18.800 192.168.100.9' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:18.800 192.168.100.9' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=712848 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 712848 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 712848 ']' 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.800 [2024-12-10 04:00:12.273937] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:18.800 [2024-12-10 04:00:12.273988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.800 [2024-12-10 04:00:12.333530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.800 [2024-12-10 04:00:12.373066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.800 [2024-12-10 04:00:12.373103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.800 [2024-12-10 04:00:12.373109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.800 [2024-12-10 04:00:12.373115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.800 [2024-12-10 04:00:12.373119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:18.800 [2024-12-10 04:00:12.374323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.800 [2024-12-10 04:00:12.374417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.800 [2024-12-10 04:00:12.374494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.800 [2024-12-10 04:00:12.374495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.800 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:18.801 "tick_rate": 2700000000, 00:12:18.801 "poll_groups": [ 00:12:18.801 { 00:12:18.801 "name": "nvmf_tgt_poll_group_000", 00:12:18.801 "admin_qpairs": 0, 00:12:18.801 "io_qpairs": 0, 00:12:18.801 "current_admin_qpairs": 0, 00:12:18.801 "current_io_qpairs": 0, 00:12:18.801 "pending_bdev_io": 0, 00:12:18.801 "completed_nvme_io": 0, 00:12:18.801 "transports": [] 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "nvmf_tgt_poll_group_001", 00:12:18.801 "admin_qpairs": 0, 00:12:18.801 "io_qpairs": 0, 00:12:18.801 "current_admin_qpairs": 0, 00:12:18.801 "current_io_qpairs": 0, 00:12:18.801 "pending_bdev_io": 0, 00:12:18.801 "completed_nvme_io": 0, 00:12:18.801 "transports": [] 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "nvmf_tgt_poll_group_002", 00:12:18.801 "admin_qpairs": 0, 00:12:18.801 "io_qpairs": 0, 00:12:18.801 "current_admin_qpairs": 0, 00:12:18.801 "current_io_qpairs": 0, 00:12:18.801 "pending_bdev_io": 0, 00:12:18.801 "completed_nvme_io": 0, 00:12:18.801 "transports": [] 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "nvmf_tgt_poll_group_003", 00:12:18.801 "admin_qpairs": 0, 00:12:18.801 "io_qpairs": 0, 00:12:18.801 "current_admin_qpairs": 0, 00:12:18.801 "current_io_qpairs": 0, 00:12:18.801 "pending_bdev_io": 0, 00:12:18.801 "completed_nvme_io": 0, 00:12:18.801 "transports": [] 00:12:18.801 } 00:12:18.801 ] 00:12:18.801 }' 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.801 [2024-12-10 04:00:12.642548] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdff120/0xe03610) succeed. 00:12:18.801 [2024-12-10 04:00:12.650732] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe007b0/0xe44cb0) succeed. 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.801 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:18.801 "tick_rate": 2700000000, 00:12:18.801 "poll_groups": [ 00:12:18.801 { 00:12:18.801 "name": "nvmf_tgt_poll_group_000", 00:12:18.801 "admin_qpairs": 0, 00:12:18.801 "io_qpairs": 0, 00:12:18.801 "current_admin_qpairs": 0, 00:12:18.801 "current_io_qpairs": 0, 00:12:18.801 "pending_bdev_io": 0, 00:12:18.801 "completed_nvme_io": 0, 00:12:18.801 "transports": [ 00:12:18.801 { 00:12:18.801 "trtype": "RDMA", 00:12:18.801 "pending_data_buffer": 0, 00:12:18.801 "devices": [ 00:12:18.801 { 00:12:18.801 "name": "mlx5_0", 00:12:18.801 "polls": 15013, 00:12:18.801 "idle_polls": 15013, 00:12:18.801 "completions": 0, 00:12:18.801 "requests": 0, 00:12:18.801 "request_latency": 0, 00:12:18.801 "pending_free_request": 0, 00:12:18.801 "pending_rdma_read": 0, 00:12:18.801 "pending_rdma_write": 0, 00:12:18.801 "pending_rdma_send": 0, 00:12:18.801 "total_send_wrs": 0, 00:12:18.801 "send_doorbell_updates": 0, 00:12:18.801 "total_recv_wrs": 4096, 00:12:18.801 "recv_doorbell_updates": 1 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "mlx5_1", 00:12:18.801 "polls": 15013, 00:12:18.801 "idle_polls": 15013, 00:12:18.801 "completions": 0, 00:12:18.801 "requests": 0, 00:12:18.801 "request_latency": 0, 00:12:18.801 "pending_free_request": 0, 00:12:18.801 "pending_rdma_read": 0, 00:12:18.801 "pending_rdma_write": 0, 00:12:18.801 "pending_rdma_send": 0, 00:12:18.801 "total_send_wrs": 0, 00:12:18.801 "send_doorbell_updates": 0, 00:12:18.801 "total_recv_wrs": 4096, 00:12:18.801 "recv_doorbell_updates": 1 00:12:18.801 } 00:12:18.801 ] 00:12:18.801 } 00:12:18.801 ] 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "nvmf_tgt_poll_group_001", 00:12:18.801 "admin_qpairs": 0, 00:12:18.801 "io_qpairs": 0, 00:12:18.801 "current_admin_qpairs": 0, 00:12:18.801 "current_io_qpairs": 0, 00:12:18.801 "pending_bdev_io": 0, 00:12:18.801 "completed_nvme_io": 0, 00:12:18.801 "transports": [ 00:12:18.801 { 00:12:18.801 "trtype": "RDMA", 00:12:18.801 "pending_data_buffer": 0, 00:12:18.801 "devices": [ 00:12:18.801 { 00:12:18.801 "name": "mlx5_0", 
00:12:18.801 "polls": 9432, 00:12:18.801 "idle_polls": 9432, 00:12:18.801 "completions": 0, 00:12:18.801 "requests": 0, 00:12:18.801 "request_latency": 0, 00:12:18.801 "pending_free_request": 0, 00:12:18.801 "pending_rdma_read": 0, 00:12:18.801 "pending_rdma_write": 0, 00:12:18.801 "pending_rdma_send": 0, 00:12:18.801 "total_send_wrs": 0, 00:12:18.801 "send_doorbell_updates": 0, 00:12:18.801 "total_recv_wrs": 4096, 00:12:18.801 "recv_doorbell_updates": 1 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "mlx5_1", 00:12:18.801 "polls": 9432, 00:12:18.801 "idle_polls": 9432, 00:12:18.801 "completions": 0, 00:12:18.801 "requests": 0, 00:12:18.801 "request_latency": 0, 00:12:18.801 "pending_free_request": 0, 00:12:18.801 "pending_rdma_read": 0, 00:12:18.801 "pending_rdma_write": 0, 00:12:18.801 "pending_rdma_send": 0, 00:12:18.801 "total_send_wrs": 0, 00:12:18.801 "send_doorbell_updates": 0, 00:12:18.801 "total_recv_wrs": 4096, 00:12:18.801 "recv_doorbell_updates": 1 00:12:18.801 } 00:12:18.801 ] 00:12:18.801 } 00:12:18.801 ] 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "nvmf_tgt_poll_group_002", 00:12:18.801 "admin_qpairs": 0, 00:12:18.801 "io_qpairs": 0, 00:12:18.801 "current_admin_qpairs": 0, 00:12:18.801 "current_io_qpairs": 0, 00:12:18.801 "pending_bdev_io": 0, 00:12:18.801 "completed_nvme_io": 0, 00:12:18.801 "transports": [ 00:12:18.801 { 00:12:18.801 "trtype": "RDMA", 00:12:18.801 "pending_data_buffer": 0, 00:12:18.801 "devices": [ 00:12:18.801 { 00:12:18.801 "name": "mlx5_0", 00:12:18.801 "polls": 5180, 00:12:18.801 "idle_polls": 5180, 00:12:18.801 "completions": 0, 00:12:18.801 "requests": 0, 00:12:18.801 "request_latency": 0, 00:12:18.801 "pending_free_request": 0, 00:12:18.801 "pending_rdma_read": 0, 00:12:18.801 "pending_rdma_write": 0, 00:12:18.801 "pending_rdma_send": 0, 00:12:18.801 "total_send_wrs": 0, 00:12:18.801 "send_doorbell_updates": 0, 00:12:18.801 "total_recv_wrs": 4096, 00:12:18.801 "recv_doorbell_updates": 1 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "mlx5_1", 00:12:18.801 "polls": 5180, 00:12:18.801 "idle_polls": 5180, 00:12:18.801 "completions": 0, 00:12:18.801 "requests": 0, 00:12:18.801 "request_latency": 0, 00:12:18.801 "pending_free_request": 0, 00:12:18.801 "pending_rdma_read": 0, 00:12:18.801 "pending_rdma_write": 0, 00:12:18.801 "pending_rdma_send": 0, 00:12:18.801 "total_send_wrs": 0, 00:12:18.801 "send_doorbell_updates": 0, 00:12:18.801 "total_recv_wrs": 4096, 00:12:18.801 "recv_doorbell_updates": 1 00:12:18.801 } 00:12:18.801 ] 00:12:18.801 } 00:12:18.801 ] 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "nvmf_tgt_poll_group_003", 00:12:18.801 "admin_qpairs": 0, 00:12:18.801 "io_qpairs": 0, 00:12:18.801 "current_admin_qpairs": 0, 00:12:18.801 "current_io_qpairs": 0, 00:12:18.801 "pending_bdev_io": 0, 00:12:18.801 "completed_nvme_io": 0, 00:12:18.801 "transports": [ 00:12:18.801 { 00:12:18.801 "trtype": "RDMA", 00:12:18.801 "pending_data_buffer": 0, 00:12:18.801 "devices": [ 00:12:18.801 { 00:12:18.801 "name": "mlx5_0", 00:12:18.801 "polls": 918, 00:12:18.801 "idle_polls": 918, 00:12:18.801 "completions": 0, 00:12:18.801 "requests": 0, 00:12:18.801 "request_latency": 0, 00:12:18.801 "pending_free_request": 0, 00:12:18.801 "pending_rdma_read": 0, 00:12:18.801 "pending_rdma_write": 0, 00:12:18.801 "pending_rdma_send": 0, 00:12:18.801 "total_send_wrs": 0, 00:12:18.801 "send_doorbell_updates": 0, 00:12:18.801 "total_recv_wrs": 4096, 00:12:18.801 "recv_doorbell_updates": 1 00:12:18.801 }, 00:12:18.801 { 00:12:18.801 "name": "mlx5_1", 
00:12:18.801 "polls": 918, 00:12:18.801 "idle_polls": 918, 00:12:18.801 "completions": 0, 00:12:18.801 "requests": 0, 00:12:18.801 "request_latency": 0, 00:12:18.801 "pending_free_request": 0, 00:12:18.801 "pending_rdma_read": 0, 00:12:18.801 "pending_rdma_write": 0, 00:12:18.801 "pending_rdma_send": 0, 00:12:18.801 "total_send_wrs": 0, 00:12:18.801 "send_doorbell_updates": 0, 00:12:18.801 "total_recv_wrs": 4096, 00:12:18.801 "recv_doorbell_updates": 1 00:12:18.801 } 00:12:18.801 ] 00:12:18.802 } 00:12:18.802 ] 00:12:18.802 } 00:12:18.802 ] 00:12:18.802 }' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:12:18.802 04:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:18.802 04:00:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 Malloc1 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 [2024-12-10 04:00:13.065248] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:18.802 04:00:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:18.802 [2024-12-10 04:00:13.115278] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:12:18.802 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:18.802 could not add new controller: failed to write to nvme-fabrics device 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.802 04:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:20.177 04:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.177 04:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:20.177 04:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.177 04:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:20.177 04:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:22.076 04:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.076 04:00:16 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:22.076 04:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.076 04:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:22.076 04:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.076 04:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:22.076 04:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:23.010 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:23.010 [2024-12-10 04:00:17.156764] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:12:23.010 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:23.011 could not add new controller: failed to write to nvme-fabrics device 00:12:23.011 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:23.011 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:23.011 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:23.011 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:23.011 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:23.011 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.011 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.011 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.011 04:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:23.944 04:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.944 04:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:23.944 04:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.944 04:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:23.944 04:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:25.844 04:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:25.844 04:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:25.844 04:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.844 04:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:25.844 04:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.844 04:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:25.844 04:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.777 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.036 [2024-12-10 04:00:21.188819] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.036 04:00:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.036 04:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:27.968 04:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.968 04:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:27.968 04:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.968 04:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:27.968 04:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:29.868 04:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:29.868 04:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:29.868 04:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.868 04:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:29.868 04:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.868 04:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:29.868 04:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:30.802 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.060 [2024-12-10 04:00:25.200365] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.060 04:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:31.995 04:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.995 04:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:31.995 04:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.995 04:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:31.995 04:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.898 04:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.898 04:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.898 
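From target/rpc.sh@81 onward the test repeats a full namespace lifecycle five times: create the subsystem, add the RDMA listener, attach Malloc1 at the fixed NSID 5 (-n 5), open it to any host, connect, confirm the host sees exactly one block device carrying the subsystem serial, then disconnect and tear everything down. A sketch of the loop under the same assumptions as above; the polling line is a simplification of the test's waitforserial helper, which per the trace sleeps 2 seconds between checks and retries up to 15 times:

for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed NSID 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 --hostnqn="$HOSTNQN"
    # waitforserial: block until exactly one device reports the subsystem serial
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )); do sleep 2; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done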
04:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.898 04:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.898 04:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.898 04:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.898 04:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.834 [2024-12-10 04:00:29.208885] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.834 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.093 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.093 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.093 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.093 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.093 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.093 04:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:36.028 04:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.028 04:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:36.028 04:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.028 04:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:36.028 04:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:37.930 04:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:37.930 04:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:37.930 04:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.930 04:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:37.930 04:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.930 04:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:37.930 04:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:38.865 04:00:33 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.865 [2024-12-10 04:00:33.216174] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.865 04:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:40.240 04:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.240 04:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:40.240 04:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.240 04:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:40.240 04:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:42.142 04:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:42.142 04:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:42.142 04:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.142 04:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:42.142 04:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.142 04:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:42.142 04:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.078 04:00:37 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.078 [2024-12-10 04:00:37.228627] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.078 04:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:44.013 04:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.013 04:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:44.013 04:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.013 04:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:44.013 04:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:45.914 04:00:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:45.914 04:00:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:45.914 04:00:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.914 04:00:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:45.914 04:00:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:12:45.914 04:00:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:45.914 04:00:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.849 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 [2024-12-10 04:00:41.267241] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.108 [2024-12-10 04:00:41.315514] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.108 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 [2024-12-10 04:00:41.363683] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 [2024-12-10 04:00:41.411830] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.109 [2024-12-10 04:00:41.459991] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.109 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.110 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.110 04:00:41 
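The second loop (target/rpc.sh@99) never connects a host at all: five times in a row it creates the subsystem, adds the listener, attaches Malloc1 with an auto-assigned NSID, enables any-host access, then immediately removes NSID 1 and deletes the subsystem. This exercises the pure RPC add/remove paths rather than the fabric itself. A sketch under the same assumptions:

for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # first free NSID -> 1
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done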
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:47.369 "tick_rate": 2700000000, 00:12:47.369 "poll_groups": [ 00:12:47.369 { 00:12:47.369 "name": "nvmf_tgt_poll_group_000", 00:12:47.369 "admin_qpairs": 2, 00:12:47.369 "io_qpairs": 27, 00:12:47.369 "current_admin_qpairs": 0, 00:12:47.369 "current_io_qpairs": 0, 00:12:47.369 "pending_bdev_io": 0, 00:12:47.369 "completed_nvme_io": 126, 00:12:47.369 "transports": [ 00:12:47.369 { 00:12:47.369 "trtype": "RDMA", 00:12:47.369 "pending_data_buffer": 0, 00:12:47.369 "devices": [ 00:12:47.369 { 00:12:47.369 "name": "mlx5_0", 00:12:47.369 "polls": 3623038, 00:12:47.369 "idle_polls": 3622709, 00:12:47.369 "completions": 367, 00:12:47.369 "requests": 183, 00:12:47.369 "request_latency": 37358556, 00:12:47.369 "pending_free_request": 0, 00:12:47.369 "pending_rdma_read": 0, 00:12:47.369 "pending_rdma_write": 0, 00:12:47.369 "pending_rdma_send": 0, 00:12:47.369 "total_send_wrs": 309, 00:12:47.369 "send_doorbell_updates": 161, 00:12:47.369 "total_recv_wrs": 4279, 00:12:47.369 "recv_doorbell_updates": 161 00:12:47.369 }, 00:12:47.369 { 00:12:47.369 "name": "mlx5_1", 00:12:47.369 "polls": 3623038, 00:12:47.369 "idle_polls": 3623038, 00:12:47.369 "completions": 0, 00:12:47.369 "requests": 0, 00:12:47.369 "request_latency": 0, 00:12:47.369 "pending_free_request": 0, 00:12:47.369 "pending_rdma_read": 0, 00:12:47.369 "pending_rdma_write": 0, 00:12:47.369 "pending_rdma_send": 0, 00:12:47.369 "total_send_wrs": 0, 00:12:47.369 "send_doorbell_updates": 0, 00:12:47.369 "total_recv_wrs": 4096, 00:12:47.369 "recv_doorbell_updates": 1 00:12:47.369 } 00:12:47.369 ] 00:12:47.369 } 00:12:47.369 ] 00:12:47.369 }, 00:12:47.369 { 00:12:47.369 "name": "nvmf_tgt_poll_group_001", 00:12:47.369 "admin_qpairs": 2, 00:12:47.369 "io_qpairs": 26, 00:12:47.369 "current_admin_qpairs": 0, 00:12:47.369 "current_io_qpairs": 0, 00:12:47.369 "pending_bdev_io": 0, 00:12:47.369 "completed_nvme_io": 174, 00:12:47.369 "transports": [ 00:12:47.369 { 00:12:47.369 "trtype": "RDMA", 00:12:47.369 "pending_data_buffer": 0, 00:12:47.369 "devices": [ 00:12:47.369 { 00:12:47.369 "name": "mlx5_0", 00:12:47.369 "polls": 3476571, 00:12:47.369 "idle_polls": 3476172, 00:12:47.369 "completions": 458, 00:12:47.369 "requests": 229, 00:12:47.369 "request_latency": 50975594, 00:12:47.369 "pending_free_request": 0, 00:12:47.369 "pending_rdma_read": 0, 00:12:47.369 "pending_rdma_write": 0, 00:12:47.369 "pending_rdma_send": 0, 00:12:47.369 "total_send_wrs": 403, 00:12:47.369 "send_doorbell_updates": 197, 00:12:47.369 "total_recv_wrs": 4325, 00:12:47.369 "recv_doorbell_updates": 198 00:12:47.369 }, 00:12:47.369 { 00:12:47.369 "name": "mlx5_1", 00:12:47.369 "polls": 3476571, 00:12:47.369 "idle_polls": 3476571, 00:12:47.369 "completions": 0, 00:12:47.369 "requests": 0, 00:12:47.369 "request_latency": 0, 00:12:47.369 "pending_free_request": 0, 00:12:47.369 
"pending_rdma_read": 0, 00:12:47.369 "pending_rdma_write": 0, 00:12:47.369 "pending_rdma_send": 0, 00:12:47.369 "total_send_wrs": 0, 00:12:47.369 "send_doorbell_updates": 0, 00:12:47.369 "total_recv_wrs": 4096, 00:12:47.369 "recv_doorbell_updates": 1 00:12:47.369 } 00:12:47.369 ] 00:12:47.369 } 00:12:47.369 ] 00:12:47.369 }, 00:12:47.369 { 00:12:47.369 "name": "nvmf_tgt_poll_group_002", 00:12:47.369 "admin_qpairs": 1, 00:12:47.369 "io_qpairs": 26, 00:12:47.369 "current_admin_qpairs": 0, 00:12:47.369 "current_io_qpairs": 0, 00:12:47.369 "pending_bdev_io": 0, 00:12:47.369 "completed_nvme_io": 79, 00:12:47.369 "transports": [ 00:12:47.369 { 00:12:47.369 "trtype": "RDMA", 00:12:47.369 "pending_data_buffer": 0, 00:12:47.369 "devices": [ 00:12:47.369 { 00:12:47.369 "name": "mlx5_0", 00:12:47.369 "polls": 3542462, 00:12:47.369 "idle_polls": 3542266, 00:12:47.369 "completions": 217, 00:12:47.369 "requests": 108, 00:12:47.369 "request_latency": 21570632, 00:12:47.369 "pending_free_request": 0, 00:12:47.369 "pending_rdma_read": 0, 00:12:47.369 "pending_rdma_write": 0, 00:12:47.369 "pending_rdma_send": 0, 00:12:47.369 "total_send_wrs": 175, 00:12:47.369 "send_doorbell_updates": 96, 00:12:47.369 "total_recv_wrs": 4204, 00:12:47.369 "recv_doorbell_updates": 96 00:12:47.369 }, 00:12:47.369 { 00:12:47.369 "name": "mlx5_1", 00:12:47.369 "polls": 3542462, 00:12:47.369 "idle_polls": 3542462, 00:12:47.369 "completions": 0, 00:12:47.369 "requests": 0, 00:12:47.369 "request_latency": 0, 00:12:47.369 "pending_free_request": 0, 00:12:47.369 "pending_rdma_read": 0, 00:12:47.369 "pending_rdma_write": 0, 00:12:47.369 "pending_rdma_send": 0, 00:12:47.369 "total_send_wrs": 0, 00:12:47.369 "send_doorbell_updates": 0, 00:12:47.369 "total_recv_wrs": 4096, 00:12:47.369 "recv_doorbell_updates": 1 00:12:47.369 } 00:12:47.369 ] 00:12:47.369 } 00:12:47.369 ] 00:12:47.369 }, 00:12:47.369 { 00:12:47.369 "name": "nvmf_tgt_poll_group_003", 00:12:47.369 "admin_qpairs": 2, 00:12:47.369 "io_qpairs": 26, 00:12:47.369 "current_admin_qpairs": 0, 00:12:47.369 "current_io_qpairs": 0, 00:12:47.369 "pending_bdev_io": 0, 00:12:47.369 "completed_nvme_io": 76, 00:12:47.369 "transports": [ 00:12:47.369 { 00:12:47.369 "trtype": "RDMA", 00:12:47.369 "pending_data_buffer": 0, 00:12:47.369 "devices": [ 00:12:47.369 { 00:12:47.369 "name": "mlx5_0", 00:12:47.369 "polls": 2849573, 00:12:47.369 "idle_polls": 2849336, 00:12:47.369 "completions": 258, 00:12:47.369 "requests": 129, 00:12:47.369 "request_latency": 24712064, 00:12:47.369 "pending_free_request": 0, 00:12:47.369 "pending_rdma_read": 0, 00:12:47.369 "pending_rdma_write": 0, 00:12:47.369 "pending_rdma_send": 0, 00:12:47.369 "total_send_wrs": 203, 00:12:47.369 "send_doorbell_updates": 116, 00:12:47.369 "total_recv_wrs": 4225, 00:12:47.369 "recv_doorbell_updates": 117 00:12:47.369 }, 00:12:47.369 { 00:12:47.369 "name": "mlx5_1", 00:12:47.369 "polls": 2849573, 00:12:47.369 "idle_polls": 2849573, 00:12:47.369 "completions": 0, 00:12:47.369 "requests": 0, 00:12:47.369 "request_latency": 0, 00:12:47.369 "pending_free_request": 0, 00:12:47.369 "pending_rdma_read": 0, 00:12:47.369 "pending_rdma_write": 0, 00:12:47.369 "pending_rdma_send": 0, 00:12:47.369 "total_send_wrs": 0, 00:12:47.369 "send_doorbell_updates": 0, 00:12:47.369 "total_recv_wrs": 4096, 00:12:47.369 "recv_doorbell_updates": 1 00:12:47.369 } 00:12:47.369 ] 00:12:47.369 } 00:12:47.369 ] 00:12:47.369 } 00:12:47.369 ] 00:12:47.369 }' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1300 > 0 )) 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 134616846 > 0 )) 00:12:47.369 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:47.370 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:47.370 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.370 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:47.370 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:47.370 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:47.370 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:47.370 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.370 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:47.370 rmmod nvme_rdma 00:12:47.370 rmmod nvme_fabrics 00:12:47.370 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.629 
04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 712848 ']' 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 712848 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 712848 ']' 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 712848 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 712848 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 712848' 00:12:47.629 killing process with pid 712848 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 712848 00:12:47.629 04:00:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 712848 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:47.888 00:12:47.888 real 0m35.888s 00:12:47.888 user 2m0.128s 00:12:47.888 sys 0m5.943s 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.888 ************************************ 00:12:47.888 END TEST nvmf_rpc 00:12:47.888 ************************************ 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.888 ************************************ 00:12:47.888 START TEST nvmf_invalid 00:12:47.888 ************************************ 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:47.888 * Looking for test storage... 
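A note on the jsum helper traced at rpc.sh@19-20 in the test that just ended: it applies a jq filter to the stats JSON captured from nvmf_get_stats at rpc.sh@110 and sums the resulting numbers with awk. A minimal sketch follows; only the jq and awk stages appear verbatim in the trace, so feeding the captured $stats variable in via a here-string is an assumption about the plumbing.

    jsum() {
        local filter=$1
        # jq emits one number per matching poll group / device; awk accumulates the sum
        jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
    }
    # usage mirroring the sanity check at rpc.sh@113 above:
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))

With the stats dumped earlier, '.poll_groups[].admin_qpairs' yields 2+2+1+2 = 7, matching the (( 7 > 0 )) check in the trace.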
00:12:47.888 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:47.888 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:48.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.148 --rc genhtml_branch_coverage=1 00:12:48.148 --rc genhtml_function_coverage=1 00:12:48.148 --rc genhtml_legend=1 00:12:48.148 --rc geninfo_all_blocks=1 00:12:48.148 --rc geninfo_unexecuted_blocks=1 00:12:48.148 00:12:48.148 ' 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:48.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.148 --rc genhtml_branch_coverage=1 00:12:48.148 --rc genhtml_function_coverage=1 00:12:48.148 --rc genhtml_legend=1 00:12:48.148 --rc geninfo_all_blocks=1 00:12:48.148 --rc geninfo_unexecuted_blocks=1 00:12:48.148 00:12:48.148 ' 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:48.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.148 --rc genhtml_branch_coverage=1 00:12:48.148 --rc genhtml_function_coverage=1 00:12:48.148 --rc genhtml_legend=1 00:12:48.148 --rc geninfo_all_blocks=1 00:12:48.148 --rc geninfo_unexecuted_blocks=1 00:12:48.148 00:12:48.148 ' 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:48.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.148 --rc genhtml_branch_coverage=1 00:12:48.148 --rc genhtml_function_coverage=1 00:12:48.148 --rc genhtml_legend=1 00:12:48.148 --rc geninfo_all_blocks=1 00:12:48.148 --rc geninfo_unexecuted_blocks=1 00:12:48.148 00:12:48.148 ' 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:48.148 
04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.148 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.149 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.149 04:00:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:53.422 04:00:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:53.422 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:53.422 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:53.423 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:53.423 Found net devices under 0000:18:00.0: mlx_0_0 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:53.423 Found net devices under 0000:18:00.1: mlx_0_1 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:12:53.423 04:00:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:53.423 04:00:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:53.423 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:53.683 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:53.683 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:53.683 altname enp24s0f0np0 00:12:53.683 altname ens785f0np0 00:12:53.683 inet 192.168.100.8/24 scope global mlx_0_0 00:12:53.683 valid_lft forever preferred_lft forever 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:53.683 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:53.683 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:12:53.683 altname enp24s0f1np1 00:12:53.683 altname ens785f1np1 00:12:53.683 inet 192.168.100.9/24 scope global mlx_0_1 00:12:53.683 valid_lft forever preferred_lft forever 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:53.683 192.168.100.9' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:53.683 192.168.100.9' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- 
# echo '192.168.100.8 00:12:53.683 192.168.100.9' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=721671 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 721671 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 721671 ']' 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.683 04:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:53.683 [2024-12-10 04:00:47.980381] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:12:53.683 [2024-12-10 04:00:47.980423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.683 [2024-12-10 04:00:48.038576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.943 [2024-12-10 04:00:48.077819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.943 [2024-12-10 04:00:48.077852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:53.943 [2024-12-10 04:00:48.077859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.943 [2024-12-10 04:00:48.077867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.943 [2024-12-10 04:00:48.077872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.943 [2024-12-10 04:00:48.079062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.943 [2024-12-10 04:00:48.079160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.943 [2024-12-10 04:00:48.079218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.943 [2024-12-10 04:00:48.079220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.943 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.943 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:53.943 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.943 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.943 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:53.943 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.943 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:53.943 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31034 00:12:54.201 [2024-12-10 04:00:48.372140] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:54.201 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:54.201 { 00:12:54.201 "nqn": "nqn.2016-06.io.spdk:cnode31034", 00:12:54.201 "tgt_name": "foobar", 00:12:54.201 "method": "nvmf_create_subsystem", 00:12:54.201 "req_id": 1 00:12:54.201 } 00:12:54.201 Got JSON-RPC error response 00:12:54.201 response: 00:12:54.201 { 00:12:54.201 "code": -32603, 00:12:54.201 "message": "Unable to find target foobar" 00:12:54.201 }' 00:12:54.201 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:54.201 { 00:12:54.201 "nqn": "nqn.2016-06.io.spdk:cnode31034", 00:12:54.201 "tgt_name": "foobar", 00:12:54.201 "method": "nvmf_create_subsystem", 00:12:54.201 "req_id": 1 00:12:54.201 } 00:12:54.201 Got JSON-RPC error response 00:12:54.201 response: 00:12:54.201 { 00:12:54.201 "code": -32603, 00:12:54.201 "message": "Unable to find target foobar" 00:12:54.201 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:54.201 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:54.201 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2278 00:12:54.201 [2024-12-10 04:00:48.564779] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode2278: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:54.460 { 00:12:54.460 "nqn": "nqn.2016-06.io.spdk:cnode2278", 00:12:54.460 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.460 "method": "nvmf_create_subsystem", 00:12:54.460 "req_id": 1 00:12:54.460 } 00:12:54.460 Got JSON-RPC error response 00:12:54.460 response: 00:12:54.460 { 00:12:54.460 "code": -32602, 00:12:54.460 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.460 }' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:54.460 { 00:12:54.460 "nqn": "nqn.2016-06.io.spdk:cnode2278", 00:12:54.460 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:54.460 "method": "nvmf_create_subsystem", 00:12:54.460 "req_id": 1 00:12:54.460 } 00:12:54.460 Got JSON-RPC error response 00:12:54.460 response: 00:12:54.460 { 00:12:54.460 "code": -32602, 00:12:54.460 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:54.460 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29733 00:12:54.460 [2024-12-10 04:00:48.757384] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29733: invalid model number 'SPDK_Controller' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:54.460 { 00:12:54.460 "nqn": "nqn.2016-06.io.spdk:cnode29733", 00:12:54.460 "model_number": "SPDK_Controller\u001f", 00:12:54.460 "method": "nvmf_create_subsystem", 00:12:54.460 "req_id": 1 00:12:54.460 } 00:12:54.460 Got JSON-RPC error response 00:12:54.460 response: 00:12:54.460 { 00:12:54.460 "code": -32602, 00:12:54.460 "message": "Invalid MN SPDK_Controller\u001f" 00:12:54.460 }' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:54.460 { 00:12:54.460 "nqn": "nqn.2016-06.io.spdk:cnode29733", 00:12:54.460 "model_number": "SPDK_Controller\u001f", 00:12:54.460 "method": "nvmf_create_subsystem", 00:12:54.460 "req_id": 1 00:12:54.460 } 00:12:54.460 Got JSON-RPC error response 00:12:54.460 response: 00:12:54.460 { 00:12:54.460 "code": -32602, 00:12:54.460 "message": "Invalid MN SPDK_Controller\u001f" 00:12:54.460 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@21 -- # local chars 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.460 04:00:48 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.460 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=Z 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '+2?;q}3x-QcW_7a.R[HZD' 00:12:54.719 04:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '+2?;q}3x-QcW_7a.R[HZD' nqn.2016-06.io.spdk:cnode12007 00:12:54.719 [2024-12-10 04:00:49.082431] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12007: invalid serial number '+2?;q}3x-QcW_7a.R[HZD' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:54.979 { 00:12:54.979 "nqn": "nqn.2016-06.io.spdk:cnode12007", 00:12:54.979 "serial_number": "+2?;q}3x-QcW_7a.R[HZD", 00:12:54.979 "method": "nvmf_create_subsystem", 00:12:54.979 "req_id": 1 00:12:54.979 } 00:12:54.979 Got JSON-RPC error response 00:12:54.979 response: 00:12:54.979 { 00:12:54.979 "code": -32602, 00:12:54.979 "message": "Invalid SN +2?;q}3x-QcW_7a.R[HZD" 00:12:54.979 }' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:54.979 { 00:12:54.979 "nqn": "nqn.2016-06.io.spdk:cnode12007", 00:12:54.979 "serial_number": "+2?;q}3x-QcW_7a.R[HZD", 00:12:54.979 "method": "nvmf_create_subsystem", 00:12:54.979 "req_id": 1 00:12:54.979 } 00:12:54.979 Got JSON-RPC error response 00:12:54.979 response: 00:12:54.979 { 00:12:54.979 "code": -32602, 00:12:54.979 "message": "Invalid SN +2?;q}3x-QcW_7a.R[HZD" 00:12:54.979 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll = 0 )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
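The character-at-a-time trace running through this block is target/invalid.sh's gen_random_s helper: it keeps an array of the decimal codes 32-127, draws one per iteration, converts it to hex with printf %x, renders it with echo -e '\xHH', and appends it to string; the final [[ ... == \- ]] test guards against the result beginning with '-'. The lengths are deliberate one-over-limit values: the 21-character string above overflows the 20-byte NVMe serial-number field, and the 41-character string being built here overflows the 40-byte model-number field, so nvmf_create_subsystem must reject both. A condensed sketch reconstructed from the trace (the leading-'-' handling is an assumption; the real helper may differ in detail):

  gen_random_s() {
      local length=$1 ll string= ch
      local chars=( $(seq 32 127) )   # candidate characters: printable ASCII, plus DEL
      for (( ll = 0; ll < length; ll++ )); do
          # pick a random code and render it via \xHH, exactly as traced above
          printf -v ch "\\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")"
          string+=$ch
      done
      [[ ${string::1} == - ]] && string=_${string:1}   # assumption: keep the result from parsing as an option
      echo "$string"
  }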
00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x69' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:54.979 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:54.980 04:00:49 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 56 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.980 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ j == \- ]] 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'jFv.A|r0"'\''i]Qvqj2=75wu"|@&5~uIwp\c>m{8ds\' 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'jFv.A|r0"'\''i]Qvqj2=75wu"|@&5~uIwp\c>m{8ds\' nqn.2016-06.io.spdk:cnode12967 00:12:55.238 [2024-12-10 04:00:49.543945] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12967: invalid model number 'jFv.A|r0"'i]Qvqj2=75wu"|@&5~uIwp\c>m{8ds\' 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:55.238 { 00:12:55.238 "nqn": "nqn.2016-06.io.spdk:cnode12967", 00:12:55.238 "model_number": "jFv.A|r0\"'\''i]Qvqj2=75wu\"|@&5~uIwp\\c>m{8ds\\", 00:12:55.238 "method": "nvmf_create_subsystem", 00:12:55.238 "req_id": 1 00:12:55.238 } 00:12:55.238 Got JSON-RPC error response 00:12:55.238 response: 00:12:55.238 { 00:12:55.238 "code": -32602, 00:12:55.238 "message": "Invalid MN jFv.A|r0\"'\''i]Qvqj2=75wu\"|@&5~uIwp\\c>m{8ds\\" 00:12:55.238 }' 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:55.238 { 00:12:55.238 "nqn": "nqn.2016-06.io.spdk:cnode12967", 00:12:55.238 "model_number": "jFv.A|r0\"'i]Qvqj2=75wu\"|@&5~uIwp\\c>m{8ds\\", 00:12:55.238 
"method": "nvmf_create_subsystem", 00:12:55.238 "req_id": 1 00:12:55.238 } 00:12:55.238 Got JSON-RPC error response 00:12:55.238 response: 00:12:55.238 { 00:12:55.238 "code": -32602, 00:12:55.238 "message": "Invalid MN jFv.A|r0\"'i]Qvqj2=75wu\"|@&5~uIwp\\c>m{8ds\\" 00:12:55.238 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.238 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:12:55.497 [2024-12-10 04:00:49.751470] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbd99e0/0xbdded0) succeed. 00:12:55.497 [2024-12-10 04:00:49.759537] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbdb070/0xc1f570) succeed. 00:12:55.755 04:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:55.755 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:12:55.755 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:12:55.755 192.168.100.9' 00:12:55.755 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:55.755 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:12:55.755 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:12:56.014 [2024-12-10 04:00:50.263938] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:56.014 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:56.014 { 00:12:56.014 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.014 "listen_address": { 00:12:56.014 "trtype": "rdma", 00:12:56.014 "traddr": "192.168.100.8", 00:12:56.014 "trsvcid": "4421" 00:12:56.014 }, 00:12:56.014 "method": "nvmf_subsystem_remove_listener", 00:12:56.014 "req_id": 1 00:12:56.014 } 00:12:56.014 Got JSON-RPC error response 00:12:56.014 response: 00:12:56.014 { 00:12:56.014 "code": -32602, 00:12:56.014 "message": "Invalid parameters" 00:12:56.014 }' 00:12:56.014 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:56.014 { 00:12:56.014 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.014 "listen_address": { 00:12:56.014 "trtype": "rdma", 00:12:56.014 "traddr": "192.168.100.8", 00:12:56.014 "trsvcid": "4421" 00:12:56.014 }, 00:12:56.014 "method": "nvmf_subsystem_remove_listener", 00:12:56.014 "req_id": 1 00:12:56.014 } 00:12:56.014 Got JSON-RPC error response 00:12:56.014 response: 00:12:56.014 { 00:12:56.014 "code": -32602, 00:12:56.014 "message": "Invalid parameters" 00:12:56.014 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:56.014 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6283 -i 0 00:12:56.292 [2024-12-10 04:00:50.456628] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6283: invalid cntlid range [0-65519] 00:12:56.292 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:56.292 { 00:12:56.292 
"nqn": "nqn.2016-06.io.spdk:cnode6283", 00:12:56.292 "min_cntlid": 0, 00:12:56.292 "method": "nvmf_create_subsystem", 00:12:56.292 "req_id": 1 00:12:56.292 } 00:12:56.292 Got JSON-RPC error response 00:12:56.292 response: 00:12:56.292 { 00:12:56.292 "code": -32602, 00:12:56.292 "message": "Invalid cntlid range [0-65519]" 00:12:56.292 }' 00:12:56.292 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:56.292 { 00:12:56.292 "nqn": "nqn.2016-06.io.spdk:cnode6283", 00:12:56.292 "min_cntlid": 0, 00:12:56.292 "method": "nvmf_create_subsystem", 00:12:56.292 "req_id": 1 00:12:56.292 } 00:12:56.292 Got JSON-RPC error response 00:12:56.292 response: 00:12:56.292 { 00:12:56.292 "code": -32602, 00:12:56.292 "message": "Invalid cntlid range [0-65519]" 00:12:56.292 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.292 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15236 -i 65520 00:12:56.292 [2024-12-10 04:00:50.649256] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15236: invalid cntlid range [65520-65519] 00:12:56.590 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:56.590 { 00:12:56.590 "nqn": "nqn.2016-06.io.spdk:cnode15236", 00:12:56.590 "min_cntlid": 65520, 00:12:56.590 "method": "nvmf_create_subsystem", 00:12:56.590 "req_id": 1 00:12:56.590 } 00:12:56.590 Got JSON-RPC error response 00:12:56.590 response: 00:12:56.590 { 00:12:56.590 "code": -32602, 00:12:56.590 "message": "Invalid cntlid range [65520-65519]" 00:12:56.590 }' 00:12:56.590 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:56.590 { 00:12:56.590 "nqn": "nqn.2016-06.io.spdk:cnode15236", 00:12:56.590 "min_cntlid": 65520, 00:12:56.590 "method": "nvmf_create_subsystem", 00:12:56.590 "req_id": 1 00:12:56.590 } 00:12:56.590 Got JSON-RPC error response 00:12:56.590 response: 00:12:56.590 { 00:12:56.590 "code": -32602, 00:12:56.590 "message": "Invalid cntlid range [65520-65519]" 00:12:56.590 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.590 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2456 -I 0 00:12:56.591 [2024-12-10 04:00:50.837908] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2456: invalid cntlid range [1-0] 00:12:56.591 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:56.591 { 00:12:56.591 "nqn": "nqn.2016-06.io.spdk:cnode2456", 00:12:56.591 "max_cntlid": 0, 00:12:56.591 "method": "nvmf_create_subsystem", 00:12:56.591 "req_id": 1 00:12:56.591 } 00:12:56.591 Got JSON-RPC error response 00:12:56.591 response: 00:12:56.591 { 00:12:56.591 "code": -32602, 00:12:56.591 "message": "Invalid cntlid range [1-0]" 00:12:56.591 }' 00:12:56.591 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:56.591 { 00:12:56.591 "nqn": "nqn.2016-06.io.spdk:cnode2456", 00:12:56.591 "max_cntlid": 0, 00:12:56.591 "method": "nvmf_create_subsystem", 00:12:56.591 "req_id": 1 00:12:56.591 } 00:12:56.591 Got JSON-RPC error response 00:12:56.591 response: 00:12:56.591 { 00:12:56.591 "code": -32602, 00:12:56.591 "message": "Invalid cntlid range [1-0]" 
00:12:56.591 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.591 04:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10014 -I 65520 00:12:56.878 [2024-12-10 04:00:51.030582] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10014: invalid cntlid range [1-65520] 00:12:56.878 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:56.878 { 00:12:56.878 "nqn": "nqn.2016-06.io.spdk:cnode10014", 00:12:56.878 "max_cntlid": 65520, 00:12:56.878 "method": "nvmf_create_subsystem", 00:12:56.878 "req_id": 1 00:12:56.878 } 00:12:56.878 Got JSON-RPC error response 00:12:56.878 response: 00:12:56.878 { 00:12:56.878 "code": -32602, 00:12:56.878 "message": "Invalid cntlid range [1-65520]" 00:12:56.878 }' 00:12:56.878 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:56.878 { 00:12:56.878 "nqn": "nqn.2016-06.io.spdk:cnode10014", 00:12:56.878 "max_cntlid": 65520, 00:12:56.878 "method": "nvmf_create_subsystem", 00:12:56.878 "req_id": 1 00:12:56.878 } 00:12:56.878 Got JSON-RPC error response 00:12:56.878 response: 00:12:56.878 { 00:12:56.878 "code": -32602, 00:12:56.878 "message": "Invalid cntlid range [1-65520]" 00:12:56.878 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.878 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18657 -i 6 -I 5 00:12:56.878 [2024-12-10 04:00:51.215220] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18657: invalid cntlid range [6-5] 00:12:56.878 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:56.878 { 00:12:56.878 "nqn": "nqn.2016-06.io.spdk:cnode18657", 00:12:56.878 "min_cntlid": 6, 00:12:56.878 "max_cntlid": 5, 00:12:56.878 "method": "nvmf_create_subsystem", 00:12:56.878 "req_id": 1 00:12:56.878 } 00:12:56.878 Got JSON-RPC error response 00:12:56.878 response: 00:12:56.878 { 00:12:56.878 "code": -32602, 00:12:56.878 "message": "Invalid cntlid range [6-5]" 00:12:56.878 }' 00:12:56.878 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:56.878 { 00:12:56.878 "nqn": "nqn.2016-06.io.spdk:cnode18657", 00:12:56.878 "min_cntlid": 6, 00:12:56.878 "max_cntlid": 5, 00:12:56.878 "method": "nvmf_create_subsystem", 00:12:56.878 "req_id": 1 00:12:56.878 } 00:12:56.878 Got JSON-RPC error response 00:12:56.878 response: 00:12:56.878 { 00:12:56.878 "code": -32602, 00:12:56.878 "message": "Invalid cntlid range [6-5]" 00:12:56.878 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.878 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:57.136 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:57.136 { 00:12:57.136 "name": "foobar", 00:12:57.136 "method": "nvmf_delete_target", 00:12:57.136 "req_id": 1 00:12:57.136 } 00:12:57.136 Got JSON-RPC error response 00:12:57.136 response: 00:12:57.136 { 00:12:57.136 "code": -32602, 00:12:57.136 "message": "The specified target doesn'\''t exist, cannot delete it." 
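The five rejections above all probe nvmf_create_subsystem's controller-ID bounds from one side or the other: -i (min_cntlid) and -I (max_cntlid) must each fall in 1..65519 (65519 = 0xFFEF, the top of the non-reserved NVMe controller-ID space), and the minimum must not exceed the maximum, which is why [0-65519], [65520-65519], [1-0], [1-65520] and [6-5] all come back as JSON-RPC error -32602. The same probes can be replayed by hand; a sketch (the cnode name here is made up):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
      # every combination below should fail with "Invalid cntlid range"
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode-demo $args ||
          echo "rejected as expected: $args"
  done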
00:12:57.136 }' 00:12:57.136 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:57.136 { 00:12:57.136 "name": "foobar", 00:12:57.136 "method": "nvmf_delete_target", 00:12:57.136 "req_id": 1 00:12:57.136 } 00:12:57.136 Got JSON-RPC error response 00:12:57.136 response: 00:12:57.136 { 00:12:57.136 "code": -32602, 00:12:57.136 "message": "The specified target doesn't exist, cannot delete it." 00:12:57.136 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:57.137 rmmod nvme_rdma 00:12:57.137 rmmod nvme_fabrics 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 721671 ']' 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 721671 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 721671 ']' 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 721671 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 721671 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 721671' 00:12:57.137 killing process with pid 721671 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 721671 00:12:57.137 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 721671 00:12:57.395 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.395 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
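The teardown traced above is the usual autotest pattern: nvmftestfini syncs, retries modprobe -v -r nvme-rdma / nvme-fabrics under set +e (the modules can stay busy for a few iterations, hence the {1..20} loop), then kills the nvmf target via killprocess, which only signals the pid after confirming it still names an SPDK reactor rather than the sudo wrapper. A condensed sketch of that killprocess logic, reconstructed from the trace (the sudo branch's behavior is an assumption):

  killprocess() {
      local pid=$1 process_name
      [[ -z $pid ]] && return 1
      kill -0 "$pid" 2> /dev/null || return 0          # already gone, nothing to do
      [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
      [[ $process_name == sudo ]] && return 1          # assumption: never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                              # reap it; wait fails once the pid exits
  }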
nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:57.395 00:12:57.395 real 0m9.529s 00:12:57.395 user 0m18.111s 00:12:57.395 sys 0m5.214s 00:12:57.395 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.395 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.395 ************************************ 00:12:57.395 END TEST nvmf_invalid 00:12:57.395 ************************************ 00:12:57.395 04:00:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:57.395 04:00:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.395 04:00:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.395 04:00:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.395 ************************************ 00:12:57.395 START TEST nvmf_connect_stress 00:12:57.395 ************************************ 00:12:57.395 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:57.655 * Looking for test storage... 00:12:57.655 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:57.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.655 --rc genhtml_branch_coverage=1 00:12:57.655 --rc genhtml_function_coverage=1 00:12:57.655 --rc genhtml_legend=1 00:12:57.655 --rc geninfo_all_blocks=1 00:12:57.655 --rc geninfo_unexecuted_blocks=1 00:12:57.655 00:12:57.655 ' 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:57.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.655 --rc genhtml_branch_coverage=1 00:12:57.655 --rc genhtml_function_coverage=1 00:12:57.655 --rc genhtml_legend=1 00:12:57.655 --rc geninfo_all_blocks=1 00:12:57.655 --rc geninfo_unexecuted_blocks=1 00:12:57.655 00:12:57.655 ' 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:57.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.655 --rc genhtml_branch_coverage=1 00:12:57.655 --rc genhtml_function_coverage=1 00:12:57.655 --rc genhtml_legend=1 00:12:57.655 --rc geninfo_all_blocks=1 00:12:57.655 --rc geninfo_unexecuted_blocks=1 00:12:57.655 00:12:57.655 ' 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:57.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.655 --rc genhtml_branch_coverage=1 00:12:57.655 --rc genhtml_function_coverage=1 00:12:57.655 --rc genhtml_legend=1 00:12:57.655 --rc geninfo_all_blocks=1 00:12:57.655 --rc geninfo_unexecuted_blocks=1 00:12:57.655 
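The comparison traced here is scripts/common.sh gating coverage flags on the lcov version: lcov --version | awk '{print $NF}' yields 1.15, and lt 1.15 2 splits both version strings on '.-:' and compares them field by field, each field validated by decimal() against ^[0-9]+$. Because 1.15 < 2, the run exports the lcov 1.x options traced around this point (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1). A condensed sketch of the comparison (equality handling is an assumption):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local ver1 ver1_l ver2 ver2_l op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}
      IFS=.-: read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          # absent fields compare as 0, so "1.15 < 2" is judged as "1.15 < 2.0"
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]   # assumption: how equality is reported
  }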
00:12:57.655 ' 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:57.655 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
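One side effect visible in the giant PATH above: paths/export.sh prepends the same /opt/go, /opt/protoc and /opt/golangci directories every time it is sourced, so the value accumulates duplicates over a long run. That is harmless (lookup stops at the first hit), but an idempotent prepend would avoid the growth; a minimal sketch (the pathmunge name is hypothetical, not part of the harness):

  pathmunge() {
      case ":$PATH:" in
          *":$1:"*) ;;          # already present, leave PATH alone
          *) PATH=$1:$PATH ;;
      esac
  }
  pathmunge /opt/go/1.21.1/bin
  pathmunge /opt/protoc/21.7/bin
  pathmunge /opt/golangci/1.54.2/bin
  export PATH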
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:57.656 04:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.924 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.924 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.924 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.924 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:02.925 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:02.925 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:02.925 Found net devices under 0000:18:00.0: mlx_0_0 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:02.925 Found net devices under 0000:18:00.1: mlx_0_1 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.925 04:00:57 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:02.925 
04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:02.925 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:03.185 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:03.185 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:03.185 altname enp24s0f0np0 00:13:03.185 altname ens785f0np0 00:13:03.185 inet 192.168.100.8/24 scope global mlx_0_0 00:13:03.185 valid_lft forever preferred_lft forever 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:03.185 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:03.185 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:03.185 altname enp24s0f1np1 00:13:03.185 altname ens785f1np1 00:13:03.185 inet 192.168.100.9/24 scope global mlx_0_1 00:13:03.185 valid_lft forever preferred_lft forever 00:13:03.185 04:00:57 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:03.185 
04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:03.185 192.168.100.9' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:03.185 192.168.100.9' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:03.185 192.168.100.9' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=725766 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 725766 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 725766 ']' 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.185 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:03.185 [2024-12-10 04:00:57.489123] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:03.185 [2024-12-10 04:00:57.489175] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.185 [2024-12-10 04:00:57.547885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.444 [2024-12-10 04:00:57.588576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.444 [2024-12-10 04:00:57.588606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.444 [2024-12-10 04:00:57.588612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.444 [2024-12-10 04:00:57.588618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.444 [2024-12-10 04:00:57.588622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.444 [2024-12-10 04:00:57.589890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.444 [2024-12-10 04:00:57.589975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.444 [2024-12-10 04:00:57.589976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.444 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.444 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:03.444 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.444 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.444 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.444 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.444 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:03.444 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.444 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.444 [2024-12-10 04:00:57.744305] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x892800/0x896cf0) succeed. 
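The stretch above is the standard harness bring-up: nvmfappstart launches build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, waitforlisten loops until the app answers on /var/tmp/spdk.sock, and the first rpc_cmd creates the RDMA transport; the create_ib_device notices (one per mlx5 port, the second, for mlx5_1, follows just below) are printed as the transport opens the devices. A minimal sketch of the same sequence done by hand, assuming an SPDK checkout at $SPDK (an illustrative variable, not from the log) and the stock scripts/rpc.py client rather than the harness helpers:

    # start the target: shm id 0, all tracepoint groups, cores 1-3 (mask 0xE)
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the RPC socket until the app is ready to accept commands
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    # create the RDMA transport; this is the call that opens the mlx5 devices
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192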
00:13:03.444 [2024-12-10 04:00:57.752510] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x893df0/0x8d8390) succeed. 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.702 [2024-12-10 04:00:57.857515] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.702 NULL1 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=725791 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 
04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.702 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.703 
04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.703 04:00:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.961 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.961 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:03.961 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.961 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.961 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.526 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.526 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:04.526 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.526 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.526 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.783 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.783 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:04.783 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.783 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.783 04:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.040 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.040 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:05.040 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.040 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.040 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.297 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.297 
04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:05.297 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.297 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.297 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.555 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.555 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:05.555 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.555 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.555 04:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.120 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.120 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:06.120 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.120 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.120 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.377 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:06.377 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.377 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.377 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.635 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.635 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:06.635 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.635 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.635 04:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.892 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.892 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:06.892 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.892 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.892 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.456 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
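From here on, the repeated kill -0 725791 / rpc_cmd pairs are the whole of the stress loop in connect_stress.sh: the connect_stress binary was started in the background for 10 seconds (-t 10) against nqn.2016-06.io.spdk:cnode1, and while it stays alive the script keeps replaying the batch of 20 randomly chosen RPC snippets assembled into rpc.txt above, so the target's state churns underneath the connect/disconnect traffic. A rough sketch of that loop's shape, assuming rpc_cmd reads the batch from stdin (the trace only shows a bare rpc_cmd at line 35):

    # signal 0 delivers nothing; it only tests whether the stressor PID still exists
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"    # replay the pre-generated RPC batch against the target
    done
    wait "$PERF_PID"         # reap the stressor; its exit status decides pass/fail

The '(725791) - No such process' message further down is this loop terminating: kill -0 finally fails once the stressor exits, and the wait at line 38 collects its status.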
00:13:07.456 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:07.456 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.456 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.456 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.713 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.713 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:07.713 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.713 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.713 04:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.971 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.971 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:07.971 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.971 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.971 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.228 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.228 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:08.228 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.228 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.228 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.486 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.486 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:08.486 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.486 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.486 04:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.051 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.051 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:09.051 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.051 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.051 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.309 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:09.309 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:09.309 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.309 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.309 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.566 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.566 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:09.566 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.566 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.566 04:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.824 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.824 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:09.824 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.824 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.824 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.395 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.395 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:10.395 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.395 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.395 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.652 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.652 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:10.652 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.652 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.652 04:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.910 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.910 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:10.910 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.910 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.910 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.168 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:11.168 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:11.168 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.168 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.168 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.733 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.733 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:11.733 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.733 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.733 04:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.991 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.991 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:11.991 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.991 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.991 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.248 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.248 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:12.248 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.248 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.248 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.506 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:12.506 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.506 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.506 04:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.764 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.764 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:12.764 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.764 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.764 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.327 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:13.327 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:13.327 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.327 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.327 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.584 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.584 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:13.584 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.584 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.584 04:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.843 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 725791 00:13:13.843 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (725791) - No such process 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 725791 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:13.843 rmmod nvme_rdma 00:13:13.843 rmmod nvme_fabrics 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 725766 ']' 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 725766 00:13:13.843 04:01:08 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 725766 ']' 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 725766 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725766 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725766' 00:13:13.843 killing process with pid 725766 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 725766 00:13:13.843 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 725766 00:13:14.101 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:14.101 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:14.101 00:13:14.101 real 0m16.692s 00:13:14.101 user 0m39.991s 00:13:14.101 sys 0m6.075s 00:13:14.101 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.101 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.101 ************************************ 00:13:14.101 END TEST nvmf_connect_stress 00:13:14.101 ************************************ 00:13:14.101 04:01:08 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:14.101 04:01:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.101 04:01:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.101 04:01:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.101 ************************************ 00:13:14.101 START TEST nvmf_fused_ordering 00:13:14.101 ************************************ 00:13:14.101 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:14.360 * Looking for test storage... 
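Past the test-storage lines, the xtrace that follows is scripts/common.sh answering lt 1.15 2, i.e. whether the installed lcov (1.15 per lcov --version) predates the 2.x series; the harness uses that to pick the old-style --rc lcov_branch_coverage=1 spelling of the coverage options echoed a little further down. cmp_versions splits both versions on dots, dashes and colons (IFS=.-:) and compares them field by field. A compact sketch of that comparison, assuming purely numeric fields (the real helper also runs each field through a decimal normalizer, visible in the trace):

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # first unequal field decides; missing fields compare as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]    # all fields equal: true only for ==, <=, >=
    }

    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"    # prints for lcov 1.15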
00:13:14.360 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:14.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.360 --rc genhtml_branch_coverage=1 00:13:14.360 --rc genhtml_function_coverage=1 00:13:14.360 --rc genhtml_legend=1 00:13:14.360 --rc geninfo_all_blocks=1 00:13:14.360 --rc geninfo_unexecuted_blocks=1 00:13:14.360 00:13:14.360 ' 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:14.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.360 --rc genhtml_branch_coverage=1 00:13:14.360 --rc genhtml_function_coverage=1 00:13:14.360 --rc genhtml_legend=1 00:13:14.360 --rc geninfo_all_blocks=1 00:13:14.360 --rc geninfo_unexecuted_blocks=1 00:13:14.360 00:13:14.360 ' 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:14.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.360 --rc genhtml_branch_coverage=1 00:13:14.360 --rc genhtml_function_coverage=1 00:13:14.360 --rc genhtml_legend=1 00:13:14.360 --rc geninfo_all_blocks=1 00:13:14.360 --rc geninfo_unexecuted_blocks=1 00:13:14.360 00:13:14.360 ' 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:14.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.360 --rc genhtml_branch_coverage=1 00:13:14.360 --rc genhtml_function_coverage=1 00:13:14.360 --rc genhtml_legend=1 00:13:14.360 --rc geninfo_all_blocks=1 00:13:14.360 --rc geninfo_unexecuted_blocks=1 00:13:14.360 00:13:14.360 ' 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.360 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.361 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.361 04:01:08 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.923 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:20.924 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:20.924 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:20.924 Found net devices under 0000:18:00.0: mlx_0_0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:20.924 Found net devices under 0000:18:00.1: mlx_0_1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.924 04:01:14 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.924 
04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:20.924 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:20.924 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:20.924 altname enp24s0f0np0 00:13:20.924 altname ens785f0np0 00:13:20.924 inet 192.168.100.8/24 scope global mlx_0_0 00:13:20.924 valid_lft forever preferred_lft forever 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:20.924 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:20.924 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:20.924 altname enp24s0f1np1 00:13:20.924 altname ens785f1np1 00:13:20.924 inet 192.168.100.9/24 scope global mlx_0_1 00:13:20.924 valid_lft forever preferred_lft forever 00:13:20.924 04:01:14 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.924 
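The get_ip_address calls traced here recover each RDMA interface's IPv4 address from iproute2 output: field 4 of the one-line `ip -o -4 addr show` format is the CIDR address, and cut strips the prefix length. The same parse as a standalone function, a sketch using only the tools visible in the trace:

get_ip_address() {
    local interface=$1
    # $4 of the one-line format is e.g. "192.168.100.8/24"; drop the "/24".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# get_ip_address mlx_0_0   -> 192.168.100.8 in this run
# get_ip_address mlx_0_1   -> 192.168.100.9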
04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:20.924 192.168.100.9' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:20.924 192.168.100.9' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:20.924 192.168.100.9' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=731199 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 731199 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 731199 ']' 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.924 04:01:14 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.924 [2024-12-10 04:01:14.491348] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:20.924 [2024-12-10 04:01:14.491402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.924 [2024-12-10 04:01:14.549686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.924 [2024-12-10 04:01:14.589743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.924 [2024-12-10 04:01:14.589774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.924 [2024-12-10 04:01:14.589781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.924 [2024-12-10 04:01:14.589786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.924 [2024-12-10 04:01:14.589791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.924 [2024-12-10 04:01:14.590262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.924 [2024-12-10 04:01:14.741554] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ee60c0/0x1eea5b0) succeed. 00:13:20.924 [2024-12-10 04:01:14.749550] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ee7570/0x1f2bc50) succeed. 
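rpc_cmd above is the harness wrapper around SPDK's RPC client, so the transport-creation step it just issued corresponds to an explicit rpc.py call like the following sketch (path assumed relative to the SPDK checkout; the flags are the ones shown in the trace: RDMA transport, 1024 shared buffers, and -u, which sets the IO unit size, at 8192):

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# Success is visible above as the two create_ib_device notices,
# one per mlx5 port (mlx5_0 and mlx5_1).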
00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.924 [2024-12-10 04:01:14.796608] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:20.924 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.925 NULL1 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.925 04:01:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:20.925 [2024-12-10 04:01:14.853829] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
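The remaining setup steps traced above build the test target: an allow-any-host subsystem capped at 10 namespaces, an RDMA listener on 192.168.100.8:4420, and a 1000 MiB, 512-byte-block null bdev attached as namespace 1, which is why the tool output below reports "Namespace ID: 1 size: 1GB". As explicit rpc.py calls, a sketch with paths assumed and every parameter taken verbatim from the trace:

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB, 512 B blocks, no backing storage
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# The fused_ordering tool then connects with the same transport string used above:
# fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'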
00:13:20.925 [2024-12-10 04:01:14.853859] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731252 ] 00:13:20.925 Attached to nqn.2016-06.io.spdk:cnode1 00:13:20.925 Namespace ID: 1 size: 1GB 00:13:20.925 fused_ordering(0) 00:13:20.925 fused_ordering(1) 00:13:20.925 fused_ordering(2) 00:13:20.925 fused_ordering(3) 00:13:20.925 fused_ordering(4) 00:13:20.925 fused_ordering(5) 00:13:20.925 fused_ordering(6) 00:13:20.925 fused_ordering(7) 00:13:20.925 fused_ordering(8) 00:13:20.925 fused_ordering(9) 00:13:20.925 fused_ordering(10) 00:13:20.925 fused_ordering(11) 00:13:20.925 fused_ordering(12) 00:13:20.925 fused_ordering(13) 00:13:20.925 fused_ordering(14) 00:13:20.925 fused_ordering(15) 00:13:20.925 fused_ordering(16) 00:13:20.925 fused_ordering(17) 00:13:20.925 fused_ordering(18) 00:13:20.925 fused_ordering(19) 00:13:20.925 fused_ordering(20) 00:13:20.925 fused_ordering(21) 00:13:20.925 fused_ordering(22) 00:13:20.925 fused_ordering(23) 00:13:20.925 fused_ordering(24) 00:13:20.925 fused_ordering(25) 00:13:20.925 fused_ordering(26) 00:13:20.925 fused_ordering(27) 00:13:20.925 fused_ordering(28) 00:13:20.925 fused_ordering(29) 00:13:20.925 fused_ordering(30) 00:13:20.925 fused_ordering(31) 00:13:20.925 fused_ordering(32) 00:13:20.925 fused_ordering(33) 00:13:20.925 fused_ordering(34) 00:13:20.925 fused_ordering(35) 00:13:20.925 fused_ordering(36) 00:13:20.925 fused_ordering(37) 00:13:20.925 fused_ordering(38) 00:13:20.925 fused_ordering(39) 00:13:20.925 fused_ordering(40) 00:13:20.925 fused_ordering(41) 00:13:20.925 fused_ordering(42) 00:13:20.925 fused_ordering(43) 00:13:20.925 fused_ordering(44) 00:13:20.925 fused_ordering(45) 00:13:20.925 fused_ordering(46) 00:13:20.925 fused_ordering(47) 00:13:20.925 fused_ordering(48) 00:13:20.925 fused_ordering(49) 00:13:20.925 fused_ordering(50) 00:13:20.925 fused_ordering(51) 00:13:20.925 fused_ordering(52) 00:13:20.925 fused_ordering(53) 00:13:20.925 fused_ordering(54) 00:13:20.925 fused_ordering(55) 00:13:20.925 fused_ordering(56) 00:13:20.925 fused_ordering(57) 00:13:20.925 fused_ordering(58) 00:13:20.925 fused_ordering(59) 00:13:20.925 fused_ordering(60) 00:13:20.925 fused_ordering(61) 00:13:20.925 fused_ordering(62) 00:13:20.925 fused_ordering(63) 00:13:20.925 fused_ordering(64) 00:13:20.925 fused_ordering(65) 00:13:20.925 fused_ordering(66) 00:13:20.925 fused_ordering(67) 00:13:20.925 fused_ordering(68) 00:13:20.925 fused_ordering(69) 00:13:20.925 fused_ordering(70) 00:13:20.925 fused_ordering(71) 00:13:20.925 fused_ordering(72) 00:13:20.925 fused_ordering(73) 00:13:20.925 fused_ordering(74) 00:13:20.925 fused_ordering(75) 00:13:20.925 fused_ordering(76) 00:13:20.925 fused_ordering(77) 00:13:20.925 fused_ordering(78) 00:13:20.925 fused_ordering(79) 00:13:20.925 fused_ordering(80) 00:13:20.925 fused_ordering(81) 00:13:20.925 fused_ordering(82) 00:13:20.925 fused_ordering(83) 00:13:20.925 fused_ordering(84) 00:13:20.925 fused_ordering(85) 00:13:20.925 fused_ordering(86) 00:13:20.925 fused_ordering(87) 00:13:20.925 fused_ordering(88) 00:13:20.925 fused_ordering(89) 00:13:20.925 fused_ordering(90) 00:13:20.925 fused_ordering(91) 00:13:20.925 fused_ordering(92) 00:13:20.925 fused_ordering(93) 00:13:20.925 fused_ordering(94) 00:13:20.925 fused_ordering(95) 00:13:20.925 fused_ordering(96) 00:13:20.925 fused_ordering(97) 00:13:20.925 fused_ordering(98) 
00:13:20.925 fused_ordering(99) 00:13:20.925 fused_ordering(100) 00:13:20.925 fused_ordering(101) 00:13:20.925 fused_ordering(102) 00:13:20.925 fused_ordering(103) 00:13:20.925 fused_ordering(104) 00:13:20.925 fused_ordering(105) 00:13:20.925 fused_ordering(106) 00:13:20.925 fused_ordering(107) 00:13:20.925 fused_ordering(108) 00:13:20.925 fused_ordering(109) 00:13:20.925 fused_ordering(110) 00:13:20.925 fused_ordering(111) 00:13:20.925 fused_ordering(112) 00:13:20.925 fused_ordering(113) 00:13:20.925 fused_ordering(114) 00:13:20.925 fused_ordering(115) 00:13:20.925 fused_ordering(116) 00:13:20.925 fused_ordering(117) 00:13:20.925 fused_ordering(118) 00:13:20.925 fused_ordering(119) 00:13:20.925 fused_ordering(120) 00:13:20.925 fused_ordering(121) 00:13:20.925 fused_ordering(122) 00:13:20.925 fused_ordering(123) 00:13:20.925 fused_ordering(124) 00:13:20.925 fused_ordering(125) 00:13:20.925 fused_ordering(126) 00:13:20.925 fused_ordering(127) 00:13:20.925 fused_ordering(128) 00:13:20.925 fused_ordering(129) 00:13:20.925 fused_ordering(130) 00:13:20.925 fused_ordering(131) 00:13:20.925 fused_ordering(132) 00:13:20.925 fused_ordering(133) 00:13:20.925 fused_ordering(134) 00:13:20.925 fused_ordering(135) 00:13:20.925 fused_ordering(136) 00:13:20.925 fused_ordering(137) 00:13:20.925 fused_ordering(138) 00:13:20.925 fused_ordering(139) 00:13:20.925 fused_ordering(140) 00:13:20.925 fused_ordering(141) 00:13:20.925 fused_ordering(142) 00:13:20.925 fused_ordering(143) 00:13:20.925 fused_ordering(144) 00:13:20.925 fused_ordering(145) 00:13:20.925 fused_ordering(146) 00:13:20.925 fused_ordering(147) 00:13:20.925 fused_ordering(148) 00:13:20.925 fused_ordering(149) 00:13:20.925 fused_ordering(150) 00:13:20.925 fused_ordering(151) 00:13:20.925 fused_ordering(152) 00:13:20.925 fused_ordering(153) 00:13:20.925 fused_ordering(154) 00:13:20.925 fused_ordering(155) 00:13:20.925 fused_ordering(156) 00:13:20.925 fused_ordering(157) 00:13:20.925 fused_ordering(158) 00:13:20.925 fused_ordering(159) 00:13:20.925 fused_ordering(160) 00:13:20.925 fused_ordering(161) 00:13:20.925 fused_ordering(162) 00:13:20.925 fused_ordering(163) 00:13:20.925 fused_ordering(164) 00:13:20.925 fused_ordering(165) 00:13:20.925 fused_ordering(166) 00:13:20.925 fused_ordering(167) 00:13:20.925 fused_ordering(168) 00:13:20.925 fused_ordering(169) 00:13:20.925 fused_ordering(170) 00:13:20.925 fused_ordering(171) 00:13:20.925 fused_ordering(172) 00:13:20.925 fused_ordering(173) 00:13:20.925 fused_ordering(174) 00:13:20.925 fused_ordering(175) 00:13:20.925 fused_ordering(176) 00:13:20.925 fused_ordering(177) 00:13:20.925 fused_ordering(178) 00:13:20.925 fused_ordering(179) 00:13:20.925 fused_ordering(180) 00:13:20.925 fused_ordering(181) 00:13:20.925 fused_ordering(182) 00:13:20.925 fused_ordering(183) 00:13:20.925 fused_ordering(184) 00:13:20.925 fused_ordering(185) 00:13:20.925 fused_ordering(186) 00:13:20.925 fused_ordering(187) 00:13:20.925 fused_ordering(188) 00:13:20.925 fused_ordering(189) 00:13:20.925 fused_ordering(190) 00:13:20.925 fused_ordering(191) 00:13:20.925 fused_ordering(192) 00:13:20.925 fused_ordering(193) 00:13:20.925 fused_ordering(194) 00:13:20.925 fused_ordering(195) 00:13:20.925 fused_ordering(196) 00:13:20.925 fused_ordering(197) 00:13:20.925 fused_ordering(198) 00:13:20.925 fused_ordering(199) 00:13:20.925 fused_ordering(200) 00:13:20.925 fused_ordering(201) 00:13:20.925 fused_ordering(202) 00:13:20.925 fused_ordering(203) 00:13:20.925 fused_ordering(204) 00:13:20.925 fused_ordering(205) 00:13:20.925 
[fused_ordering(206) through fused_ordering(1023): 818 further fused-ordering iterations logged between 00:13:20.925 and 00:13:21.185, one "fused_ordering(N)" entry per iteration, identical apart from the counter]
00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:21.185 rmmod nvme_rdma 00:13:21.185 rmmod nvme_fabrics 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:21.185 04:01:15
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 731199 ']' 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 731199 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 731199 ']' 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 731199 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.185 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 731199 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 731199' 00:13:21.444 killing process with pid 731199 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 731199 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 731199 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:21.444 00:13:21.444 real 0m7.292s 00:13:21.444 user 0m3.582s 00:13:21.444 sys 0m4.798s 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.444 ************************************ 00:13:21.444 END TEST nvmf_fused_ordering 00:13:21.444 ************************************ 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.444 04:01:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:21.703 ************************************ 00:13:21.703 START TEST nvmf_ns_masking 00:13:21.703 ************************************ 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:13:21.703 * Looking for test storage... 
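The killprocess/nvmftestfini sequence traced above follows a standard bash teardown pattern. A minimal sketch of such a helper, assuming a Linux host with procps installed; the structure is illustrative, not the exact autotest_common.sh source:

  killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only tests that the pid exists and is signalable
    kill -0 "$pid" 2>/dev/null || return 0
    if [ "$(uname)" = Linux ]; then
      # mirror the ps --no-headers -o comm= check used to log the process name
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid ($process_name)"
    fi
    kill "$pid"
    # reap the child so its listen socket and shared memory are released before the next test
    wait "$pid" 2>/dev/null || true
  }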
00:13:21.703 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.703 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.704 --rc genhtml_branch_coverage=1 00:13:21.704 --rc genhtml_function_coverage=1 00:13:21.704 --rc genhtml_legend=1 00:13:21.704 --rc geninfo_all_blocks=1 00:13:21.704 --rc geninfo_unexecuted_blocks=1 00:13:21.704 00:13:21.704 ' 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.704 --rc genhtml_branch_coverage=1 00:13:21.704 --rc genhtml_function_coverage=1 00:13:21.704 --rc genhtml_legend=1 00:13:21.704 --rc geninfo_all_blocks=1 00:13:21.704 --rc geninfo_unexecuted_blocks=1 00:13:21.704 00:13:21.704 ' 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.704 --rc genhtml_branch_coverage=1 00:13:21.704 --rc genhtml_function_coverage=1 00:13:21.704 --rc genhtml_legend=1 00:13:21.704 --rc geninfo_all_blocks=1 00:13:21.704 --rc geninfo_unexecuted_blocks=1 00:13:21.704 00:13:21.704 ' 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:21.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.704 --rc genhtml_branch_coverage=1 00:13:21.704 --rc genhtml_function_coverage=1 00:13:21.704 --rc genhtml_legend=1 00:13:21.704 --rc geninfo_all_blocks=1 00:13:21.704 --rc geninfo_unexecuted_blocks=1 00:13:21.704 00:13:21.704 ' 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.704 04:01:15 
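The lt/cmp_versions trace above compares dotted version strings component-wise (here lcov 1.15 against 2). A minimal sketch of the same idea, assuming purely numeric dotted versions; the helper name is illustrative:

  # succeed (return 0) when $1 is strictly older than $2, e.g. version_lt 1.15 2
  version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
      # a missing component counts as 0, so 2 and 2.0 compare equal
      local a=${ver1[i]:-0} b=${ver2[i]:-0}
      (( a > b )) && return 1
      (( a < b )) && return 0
    done
    return 1
  }

version_lt 1.15 2 succeeds on the first component (1 < 2), matching the lt 1.15 2 result that selects the pre-2.0 lcov option set in the trace.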
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.704 04:01:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.704 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:21.704 04:01:16 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7b10d7ee-99e9-4d70-991b-61420da6cd8e 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=686ddeb8-21b8-4c7b-824a-55389836314a 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c4db2ff5-43d6-4002-9a79-e1cded3c291b 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.704 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.705 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.705 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.705 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:21.705 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:21.705 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.705 04:01:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
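The namespace and host identifiers set up above come straight from uuidgen and nvme-cli. A minimal sketch of the same setup, assuming both tools are installed; variable names follow the trace:

  # two distinct namespace UUIDs for the masking test
  ns1uuid=$(uuidgen)
  ns2uuid=$(uuidgen)
  # nvme gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  # the host ID is the UUID suffix of that NQN
  NVME_HOSTID=${NVME_HOSTNQN##*:}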
pci_drivers=() 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.267 04:01:21 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:28.267 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:28.267 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.267 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:28.267 Found net devices under 0000:18:00.0: mlx_0_0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:28.268 Found net devices under 0000:18:00.1: mlx_0_1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
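The device discovery above resolves each Mellanox PCI function to its netdev names through sysfs. The same lookup in isolation, using the addresses found on this rig and assuming a Linux sysfs layout:

  for pci in 0000:18:00.0 0000:18:00.1; do
    # every netdev registered on this PCI function appears under its sysfs node
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # strip the path prefix, leaving bare interface names such as mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done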
\m\l\x\_\0\_\0 ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:28.268 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.268 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:28.268 altname enp24s0f0np0 00:13:28.268 altname ens785f0np0 00:13:28.268 inet 192.168.100.8/24 scope global mlx_0_0 00:13:28.268 valid_lft forever preferred_lft forever 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:28.268 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.268 link/ether 
50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:28.268 altname enp24s0f1np1 00:13:28.268 altname ens785f1np1 00:13:28.268 inet 192.168.100.9/24 scope global mlx_0_1 00:13:28.268 valid_lft forever preferred_lft forever 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:28.268 192.168.100.9' 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:28.268 192.168.100.9' 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:28.268 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:28.268 192.168.100.9' 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=734624 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 734624 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 734624 ']' 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.269 04:01:21 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:28.269 [2024-12-10 04:01:21.705172] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:28.269 [2024-12-10 04:01:21.705221] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.269 [2024-12-10 04:01:21.767019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.269 [2024-12-10 04:01:21.805218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.269 [2024-12-10 04:01:21.805273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.269 [2024-12-10 04:01:21.805280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.269 [2024-12-10 04:01:21.805287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.269 [2024-12-10 04:01:21.805308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.269 [2024-12-10 04:01:21.805775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.269 04:01:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:28.269 [2024-12-10 04:01:22.111066] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1977dc0/0x197c2b0) succeed. 00:13:28.269 [2024-12-10 04:01:22.119098] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1979270/0x19bd950) succeed. 
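At this point the target side is up: nvme-rdma was modprobed, nvmf_tgt was started on core 0 (nvmfpid=734624) and waited for on /var/tmp/spdk.sock, and ns_masking.sh created the RDMA transport; the two create_ib_device notices above confirm both mlx5 ports registered with it. A condensed sketch of that bring-up, assuming the job's SPDK checkout as the working directory:

  sudo modprobe nvme-rdma
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  sleep 1   # the harness uses its waitforlisten helper rather than a sleep
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192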
00:13:28.269 04:01:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:28.269 04:01:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:28.269 04:01:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:28.269 Malloc1 00:13:28.269 04:01:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:28.269 Malloc2 00:13:28.269 04:01:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:28.527 04:01:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:28.527 04:01:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:28.785 [2024-12-10 04:01:23.024125] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:28.785 04:01:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:28.785 04:01:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4db2ff5-43d6-4002-9a79-e1cded3c291b -a 192.168.100.8 -s 4420 -i 4 00:13:29.043 04:01:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.043 04:01:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:29.043 04:01:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.043 04:01:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:29.043 04:01:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:31.572 [ 0]:0x1 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=86ef28d6a83d46788ec0fe74e5ce31e4 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 86ef28d6a83d46788ec0fe74e5ce31e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:31.572 [ 0]:0x1 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=86ef28d6a83d46788ec0fe74e5ce31e4 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 86ef28d6a83d46788ec0fe74e5ce31e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:31.572 [ 1]:0x2 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58fc2bc950d74888a497a0288a76c80d 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58fc2bc950d74888a497a0288a76c80d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:31.572 04:01:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:13:31.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.830 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.089 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:32.089 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:32.089 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4db2ff5-43d6-4002-9a79-e1cded3c291b -a 192.168.100.8 -s 4420 -i 4 00:13:32.373 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:32.373 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:32.373 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.373 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:32.373 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:32.373 04:01:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:34.902 [ 0]:0x2 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58fc2bc950d74888a497a0288a76c80d 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58fc2bc950d74888a497a0288a76c80d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.902 04:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:34.902 [ 0]:0x1 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.902 04:01:29 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=86ef28d6a83d46788ec0fe74e5ce31e4 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 86ef28d6a83d46788ec0fe74e5ce31e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:34.902 [ 1]:0x2 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58fc2bc950d74888a497a0288a76c80d 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58fc2bc950d74888a497a0288a76c80d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:34.902 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:35.160 [ 0]:0x2 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58fc2bc950d74888a497a0288a76c80d 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58fc2bc950d74888a497a0288a76c80d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:35.160 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.418 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:35.677 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:35.677 04:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4db2ff5-43d6-4002-9a79-e1cded3c291b -a 192.168.100.8 -s 4420 -i 4 00:13:35.935 04:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:35.935 04:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:35.935 04:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.935 04:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:35.935 04:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:35.935 04:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:37.833 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:37.833 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:37.833 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.833 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:37.833 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.833 04:01:32 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.090 [ 0]:0x1 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=86ef28d6a83d46788ec0fe74e5ce31e4 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 86ef28d6a83d46788ec0fe74e5ce31e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.090 [ 1]:0x2 00:13:38.090 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.091 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.091 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58fc2bc950d74888a497a0288a76c80d 00:13:38.091 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58fc2bc950d74888a497a0288a76c80d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.091 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:38.349 04:01:32 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.349 [ 0]:0x2 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58fc2bc950d74888a497a0288a76c80d 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58fc2bc950d74888a497a0288a76c80d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:13:38.349 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:38.607 [2024-12-10 04:01:32.786200] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:13:38.607 request:
00:13:38.607 {
00:13:38.607 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:38.607 "nsid": 2,
00:13:38.607 "host": "nqn.2016-06.io.spdk:host1",
00:13:38.607 "method": "nvmf_ns_remove_host",
00:13:38.607 "req_id": 1
00:13:38.607 }
00:13:38.607 Got JSON-RPC error response
00:13:38.608 response:
00:13:38.608 {
00:13:38.608 "code": -32602,
00:13:38.608 "message": "Invalid parameters"
00:13:38.608 }
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:38.608 04:01:32
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.608 [ 0]:0x2 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58fc2bc950d74888a497a0288a76c80d 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58fc2bc950d74888a497a0288a76c80d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:38.608 04:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=736798 00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 736798 /var/tmp/host.sock 00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 736798 ']' 00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:38.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
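ns_masking.sh now starts a second SPDK application (spdk_tgt on /var/tmp/host.sock, core mask 2, hostpid=736798) to act as the initiator, so namespace visibility can be checked through bdev_nvme instead of the kernel nvme-cli used so far. A rough sketch of the pattern exercised below, with the NQNs, NGUID, and address reused from this run (the -i flag on add_ns is copied verbatim from the trace; the earlier add_ns call spelled the hidden-namespace option --no-auto-visible):

  # target side: namespace 1 carries an explicit NGUID and starts hidden
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 \
    -n 1 -g 7B10D7EE99E94D70991B61420DA6CD8E -i
  ./scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # initiator side: attach through the second app's RPC socket
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0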
00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.866 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.124 [2024-12-10 04:01:33.261590] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:39.124 [2024-12-10 04:01:33.261631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736798 ] 00:13:39.124 [2024-12-10 04:01:33.314936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.124 [2024-12-10 04:01:33.354844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.382 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.382 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:39.382 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.382 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.640 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7b10d7ee-99e9-4d70-991b-61420da6cd8e 00:13:39.640 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:39.640 04:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7B10D7EE99E94D70991B61420DA6CD8E -i 00:13:39.898 04:01:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 686ddeb8-21b8-4c7b-824a-55389836314a 00:13:39.898 04:01:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:39.898 04:01:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 686DDEB821B84C7B824A55389836314A -i 00:13:39.898 04:01:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:40.156 04:01:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:40.414 04:01:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:40.414 04:01:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:13:40.673 nvme0n1 00:13:40.673 04:01:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:40.673 04:01:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:40.673 nvme1n2 00:13:40.929 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:40.929 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:40.929 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:40.929 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:40.929 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:40.929 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:40.929 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:40.929 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:40.929 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:41.187 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7b10d7ee-99e9-4d70-991b-61420da6cd8e == \7\b\1\0\d\7\e\e\-\9\9\e\9\-\4\d\7\0\-\9\9\1\b\-\6\1\4\2\0\d\a\6\c\d\8\e ]] 00:13:41.187 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:41.187 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:41.187 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:41.444 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 686ddeb8-21b8-4c7b-824a-55389836314a == \6\8\6\d\d\e\b\8\-\2\1\b\8\-\4\c\7\b\-\8\2\4\a\-\5\5\3\8\9\8\3\6\3\1\4\a ]] 00:13:41.444 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.444 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.702 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7b10d7ee-99e9-4d70-991b-61420da6cd8e 00:13:41.702 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:41.702 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7B10D7EE99E94D70991B61420DA6CD8E 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7B10D7EE99E94D70991B61420DA6CD8E 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:41.703 04:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7B10D7EE99E94D70991B61420DA6CD8E 00:13:41.960 [2024-12-10 04:01:36.123870] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:41.960 [2024-12-10 04:01:36.123901] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:41.960 [2024-12-10 04:01:36.123909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.960 request: 00:13:41.960 { 00:13:41.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.960 "namespace": { 00:13:41.960 "bdev_name": "invalid", 00:13:41.960 "nsid": 1, 00:13:41.960 "nguid": "7B10D7EE99E94D70991B61420DA6CD8E", 00:13:41.960 "no_auto_visible": false, 00:13:41.960 "hide_metadata": false 00:13:41.960 }, 00:13:41.960 "method": "nvmf_subsystem_add_ns", 00:13:41.960 "req_id": 1 00:13:41.960 } 00:13:41.960 Got JSON-RPC error response 00:13:41.960 response: 00:13:41.960 { 00:13:41.960 "code": -32602, 00:13:41.960 "message": "Invalid parameters" 00:13:41.960 } 00:13:41.960 04:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:41.960 04:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.961 04:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.961 04:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.961 
04:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7b10d7ee-99e9-4d70-991b-61420da6cd8e 00:13:41.961 04:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:41.961 04:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7B10D7EE99E94D70991B61420DA6CD8E -i 00:13:41.961 04:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 736798 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 736798 ']' 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 736798 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 736798 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 736798' 00:13:44.499 killing process with pid 736798 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 736798 00:13:44.499 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 736798 00:13:44.757 04:01:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:44.757 04:01:39 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:44.757 rmmod nvme_rdma 00:13:44.757 rmmod nvme_fabrics 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 734624 ']' 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 734624 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 734624 ']' 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 734624 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.757 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 734624 00:13:45.016 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.016 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.016 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 734624' 00:13:45.016 killing process with pid 734624 00:13:45.016 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 734624 00:13:45.016 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 734624 00:13:45.016 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:45.016 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:45.016 00:13:45.016 real 0m23.564s 00:13:45.016 user 0m30.131s 00:13:45.016 sys 0m6.229s 00:13:45.016 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.016 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.016 ************************************ 00:13:45.016 END TEST nvmf_ns_masking 00:13:45.016 ************************************ 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.275 ************************************ 00:13:45.275 START TEST nvmf_nvme_cli 00:13:45.275 ************************************ 00:13:45.275 
04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:45.275 * Looking for test storage... 00:13:45.275 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:45.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.275 --rc genhtml_branch_coverage=1 00:13:45.275 --rc genhtml_function_coverage=1 00:13:45.275 --rc genhtml_legend=1 00:13:45.275 --rc geninfo_all_blocks=1 00:13:45.275 --rc geninfo_unexecuted_blocks=1 00:13:45.275 00:13:45.275 ' 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:45.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.275 --rc genhtml_branch_coverage=1 00:13:45.275 --rc genhtml_function_coverage=1 00:13:45.275 --rc genhtml_legend=1 00:13:45.275 --rc geninfo_all_blocks=1 00:13:45.275 --rc geninfo_unexecuted_blocks=1 00:13:45.275 00:13:45.275 ' 00:13:45.275 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:45.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.275 --rc genhtml_branch_coverage=1 00:13:45.276 --rc genhtml_function_coverage=1 00:13:45.276 --rc genhtml_legend=1 00:13:45.276 --rc geninfo_all_blocks=1 00:13:45.276 --rc geninfo_unexecuted_blocks=1 00:13:45.276 00:13:45.276 ' 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:45.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.276 --rc genhtml_branch_coverage=1 00:13:45.276 --rc genhtml_function_coverage=1 00:13:45.276 --rc genhtml_legend=1 00:13:45.276 --rc geninfo_all_blocks=1 00:13:45.276 --rc geninfo_unexecuted_blocks=1 00:13:45.276 00:13:45.276 ' 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.276 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.276 04:01:39 
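The "integer expression expected" message above is a real (benign) artifact of the harness: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', i.e. a numeric -eq test against an empty/unset variable, and test rejects the empty string as a number. A minimal sketch of that failure mode and the usual defensive forms (the variable name here is illustrative, not from the harness):

    # fails with "integer expression expected" when $flag is empty or unset
    [ "$flag" -eq 1 ] && echo enabled

    # defensive alternatives: default the value, or compare as a string
    [ "${flag:-0}" -eq 1 ] && echo enabled
    [ "$flag" = "1" ] && echo enabled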
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:45.276 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:45.277 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.277 04:01:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:51.973 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:51.973 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:51.973 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:51.974 Found net devices under 0000:18:00.0: mlx_0_0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:51.974 Found net devices under 0000:18:00.1: mlx_0_1 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:51.974 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:51.974 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:51.974 altname enp24s0f0np0 00:13:51.974 altname ens785f0np0 00:13:51.974 inet 192.168.100.8/24 scope global mlx_0_0 00:13:51.974 valid_lft forever preferred_lft forever 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:51.974 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:51.974 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:51.974 altname enp24s0f1np1 00:13:51.974 altname ens785f1np1 00:13:51.974 inet 192.168.100.9/24 scope global mlx_0_1 00:13:51.974 valid_lft forever preferred_lft forever 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:51.974 04:01:45 
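As the trace above shows, the harness derives each RDMA interface's IPv4 address with a small ip/awk/cut pipeline, yielding 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1. A sketch reconstructed from the traced commands (function name chosen for readability):

    # print an interface's first IPv4 address without the /prefix suffix
    get_ipv4() {
        local interface=$1
        # 'ip -o' prints one record per line; field 4 is e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ipv4 mlx_0_0   # -> 192.168.100.8 in this run
    get_ipv4 mlx_0_1   # -> 192.168.100.9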
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:51.974 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:51.975 192.168.100.9' 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:51.975 192.168.100.9' 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:51.975 192.168.100.9' 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:13:51.975 04:01:45 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=741383 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 741383 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 741383 ']' 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 [2024-12-10 04:01:45.419226] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:13:51.975 [2024-12-10 04:01:45.419278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.975 [2024-12-10 04:01:45.479499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.975 [2024-12-10 04:01:45.522593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.975 [2024-12-10 04:01:45.522629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:51.975 [2024-12-10 04:01:45.522636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.975 [2024-12-10 04:01:45.522642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.975 [2024-12-10 04:01:45.522646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.975 [2024-12-10 04:01:45.525285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.975 [2024-12-10 04:01:45.525302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.975 [2024-12-10 04:01:45.525384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.975 [2024-12-10 04:01:45.525386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 [2024-12-10 04:01:45.692755] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x125e0c0/0x12625b0) succeed. 00:13:51.975 [2024-12-10 04:01:45.700901] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x125f750/0x12a3c50) succeed. 
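With the RDMA transport created and both mlx5 IB devices registered, the records that follow provision the test subsystem. Collected in one place, the rpc.py sequence this stage issues is (paths abbreviated; all values as traced in this run):

    # two 64 MiB bdevs with 512-byte blocks, backed by RAM
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1

    # subsystem with fixed serial/model and controller ID, open to any host (-a)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291

    # expose both bdevs as namespaces, then listen on the first RDMA port
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420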
00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 Malloc0 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 Malloc1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 [2024-12-10 04:01:45.906292] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:51.975 04:01:45 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.975 04:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:13:51.975 00:13:51.975 Discovery Log Number of Records 2, Generation counter 2 00:13:51.975 =====Discovery Log Entry 0====== 00:13:51.975 trtype: rdma 00:13:51.975 adrfam: ipv4 00:13:51.975 subtype: current discovery subsystem 00:13:51.975 treq: not required 00:13:51.975 portid: 0 00:13:51.975 trsvcid: 4420 00:13:51.975 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:51.975 traddr: 192.168.100.8 00:13:51.975 eflags: explicit discovery connections, duplicate discovery information 00:13:51.975 rdma_prtype: not specified 00:13:51.975 rdma_qptype: connected 00:13:51.975 rdma_cms: rdma-cm 00:13:51.975 rdma_pkey: 0x0000 00:13:51.975 =====Discovery Log Entry 1====== 00:13:51.975 trtype: rdma 00:13:51.975 adrfam: ipv4 00:13:51.975 subtype: nvme subsystem 00:13:51.975 treq: not required 00:13:51.975 portid: 0 00:13:51.975 trsvcid: 4420 00:13:51.975 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:51.975 traddr: 192.168.100.8 00:13:51.975 eflags: none 00:13:51.975 rdma_prtype: not specified 00:13:51.975 rdma_qptype: connected 00:13:51.975 rdma_cms: rdma-cm 00:13:51.975 rdma_pkey: 0x0000 00:13:51.975 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:51.975 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:51.975 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:51.976 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.976 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:51.976 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:51.976 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.976 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:51.976 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.976 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:51.976 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:52.909 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:52.909 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:52.909 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.909 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:52.909 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:52.909 04:01:46 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:54.810 04:01:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:54.810 /dev/nvme0n2 ]] 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:54.810 04:01:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:55.745 
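The host-side check traced above is worth spelling out: rather than inspecting controller state, waitforserial simply counts block devices whose SERIAL column matches the subsystem's serial, and since two namespaces were added it expects two matches. A condensed sketch of that connect/verify/disconnect cycle, using the values from this run (the real helper retries the count in a loop rather than sleeping once):

    NQN=nqn.2016-06.io.spdk:cnode1
    nvme connect -i 15 -t rdma -n "$NQN" -a 192.168.100.8 -s 4420

    # two namespaces were added, so two matching block devices must appear
    sleep 2
    found=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
    [ "$found" -eq 2 ] || exit 1   # /dev/nvme0n1 and /dev/nvme0n2 in this run

    nvme disconnect -n "$NQN"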
04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:55.745 rmmod nvme_rdma 00:13:55.745 rmmod nvme_fabrics 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 741383 ']' 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 741383 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 741383 ']' 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 741383 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.745 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 741383 00:13:56.003 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.004 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.004 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 741383' 00:13:56.004 killing process with pid 741383 00:13:56.004 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 741383 00:13:56.004 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 741383 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:56.262 00:13:56.262 real 0m10.962s 00:13:56.262 user 0m20.877s 00:13:56.262 sys 0m4.908s 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:56.262 ************************************ 00:13:56.262 END TEST nvmf_nvme_cli 00:13:56.262 ************************************ 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.262 ************************************ 00:13:56.262 START TEST nvmf_auth_target 00:13:56.262 ************************************ 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:13:56.262 * Looking for test storage... 00:13:56.262 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.262 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:56.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.263 --rc genhtml_branch_coverage=1 00:13:56.263 --rc genhtml_function_coverage=1 00:13:56.263 --rc genhtml_legend=1 00:13:56.263 --rc geninfo_all_blocks=1 00:13:56.263 --rc geninfo_unexecuted_blocks=1 00:13:56.263 00:13:56.263 ' 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:56.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.263 --rc genhtml_branch_coverage=1 00:13:56.263 --rc genhtml_function_coverage=1 00:13:56.263 --rc genhtml_legend=1 00:13:56.263 --rc geninfo_all_blocks=1 00:13:56.263 --rc geninfo_unexecuted_blocks=1 00:13:56.263 00:13:56.263 ' 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:56.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.263 --rc genhtml_branch_coverage=1 00:13:56.263 --rc genhtml_function_coverage=1 00:13:56.263 --rc genhtml_legend=1 00:13:56.263 --rc geninfo_all_blocks=1 00:13:56.263 --rc geninfo_unexecuted_blocks=1 00:13:56.263 00:13:56.263 ' 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:56.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.263 --rc genhtml_branch_coverage=1 00:13:56.263 --rc genhtml_function_coverage=1 00:13:56.263 --rc genhtml_legend=1 00:13:56.263 --rc geninfo_all_blocks=1 00:13:56.263 --rc geninfo_unexecuted_blocks=1 00:13:56.263 00:13:56.263 ' 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.263 04:01:50 
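The lcov probe above boils down to a field-wise numeric version compare, splitting on the same '.', '-' and ':' separators. A compact sketch of that logic:

  # Return 0 (true) when $1 sorts strictly before $2; missing fields count
  # as 0, so 1.15 < 2 and 2 compares equal to 2.0.
  version_lt() {
    local -a a b
    local IFS=.-: v n
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
      (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
      (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1
  }
  version_lt "1.15" "2" && echo "lcov is older than 2.x"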
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.263 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.264 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
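The host identity used throughout this test comes from nvme-cli, as traced above: gen-hostnqn prints an nqn.2014-08.org.nvmexpress:uuid:<uuid> string whose trailing UUID doubles as the host ID. A sketch of deriving both values; the uuidgen fallback is an assumption for machines without nvme-cli.

  if command -v nvme > /dev/null; then
    hostnqn=$(nvme gen-hostnqn)
  else
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"   # assumed fallback
  fi
  hostid=${hostnqn##*:uuid:}   # strip the NQN prefix to recover the UUID
  echo "NQN=$hostnqn ID=$hostid"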
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.264 04:01:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:01.535 04:01:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:01.535 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:01.535 04:01:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:01.535 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:01.535 Found net devices under 0000:18:00.0: mlx_0_0 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:01.535 Found net devices under 0000:18:00.1: mlx_0_1 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.535 04:01:55 
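What the device scan above amounts to: match PCI functions by vendor:device ID, then list the netdevs sysfs exposes under each. A standalone sketch using lspci in place of the script's cached PCI map; 15b3:1015 is the ConnectX-4 Lx pair found in this run.

  # -D prints the full domain:bus:dev.fn address, -n keeps IDs numeric,
  # -d filters on vendor:device.
  for pci in $(lspci -Dn -d 15b3:1015 | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
    done
  done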
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:01.535 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:01.536 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:01.536 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:01.536 altname enp24s0f0np0 00:14:01.536 altname ens785f0np0 00:14:01.536 inet 192.168.100.8/24 scope global mlx_0_0 00:14:01.536 valid_lft forever preferred_lft forever 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:01.536 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:01.536 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:01.536 altname enp24s0f1np1 00:14:01.536 altname ens785f1np1 00:14:01.536 inet 192.168.100.9/24 scope global mlx_0_1 00:14:01.536 valid_lft forever preferred_lft forever 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:01.536 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:01.537 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:01.795 04:01:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:01.795 192.168.100.9' 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:01.795 192.168.100.9' 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:01.795 192.168.100.9' 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=745482 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 745482 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 745482 ']' 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
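The first and second target IPs are then peeled off the multi-line RDMA_IP_LIST exactly as traced: head for the first entry, tail -n +2 piped to head for the second. In isolation:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'   # values from this run
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"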
00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.795 04:01:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=745504 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e2589a6223a69ff4ba86b9175e563cc44f033acea0957465 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3Iw 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e2589a6223a69ff4ba86b9175e563cc44f033acea0957465 0 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e2589a6223a69ff4ba86b9175e563cc44f033acea0957465 0 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e2589a6223a69ff4ba86b9175e563cc44f033acea0957465 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 
-- # python - 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3Iw 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3Iw 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.3Iw 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=145180f86537835437e1d5f07959fa908672c3fdf79d3a6fcbf405e3be32e152 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iu8 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 145180f86537835437e1d5f07959fa908672c3fdf79d3a6fcbf405e3be32e152 3 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 145180f86537835437e1d5f07959fa908672c3fdf79d3a6fcbf405e3be32e152 3 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=145180f86537835437e1d5f07959fa908672c3fdf79d3a6fcbf405e3be32e152 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iu8 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iu8 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.iu8 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=sha256 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bb3b8b04762fde3d6573c5ed7a56b3b8 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iBz 00:14:02.055 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bb3b8b04762fde3d6573c5ed7a56b3b8 1 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bb3b8b04762fde3d6573c5ed7a56b3b8 1 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bb3b8b04762fde3d6573c5ed7a56b3b8 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iBz 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iBz 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.iBz 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=67bdecda8b712a9ef6f204c469cc087dc211b21d35e6006d 00:14:02.056 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.i7Q 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 67bdecda8b712a9ef6f204c469cc087dc211b21d35e6006d 2 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 67bdecda8b712a9ef6f204c469cc087dc211b21d35e6006d 2 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix 
key digest 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=67bdecda8b712a9ef6f204c469cc087dc211b21d35e6006d 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.i7Q 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.i7Q 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.i7Q 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dab2c90096cde71911bd1a5ee827f80949ce53992aaee930 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.M5c 00:14:02.315 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dab2c90096cde71911bd1a5ee827f80949ce53992aaee930 2 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dab2c90096cde71911bd1a5ee827f80949ce53992aaee930 2 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dab2c90096cde71911bd1a5ee827f80949ce53992aaee930 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.M5c 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.M5c 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.M5c 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:02.316 04:01:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=43a80a7253a8496f227ebfe27a251360 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cP0 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 43a80a7253a8496f227ebfe27a251360 1 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 43a80a7253a8496f227ebfe27a251360 1 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=43a80a7253a8496f227ebfe27a251360 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cP0 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cP0 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.cP0 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1f752d11d10bb5f4556a5aa7e039edd3a7eeb54c46c07ce8d835cc7fecb18481 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # 
file=/tmp/spdk.key-sha512.1JF 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1f752d11d10bb5f4556a5aa7e039edd3a7eeb54c46c07ce8d835cc7fecb18481 3 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1f752d11d10bb5f4556a5aa7e039edd3a7eeb54c46c07ce8d835cc7fecb18481 3 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1f752d11d10bb5f4556a5aa7e039edd3a7eeb54c46c07ce8d835cc7fecb18481 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1JF 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1JF 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.1JF 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 745482 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 745482 ']' 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.316 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.575 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.575 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:02.575 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 745504 /var/tmp/host.sock 00:14:02.575 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 745504 ']' 00:14:02.575 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:02.575 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.575 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:02.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
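Each gen_dhchap_key call above follows the same recipe: read len/2 random bytes as hex with xxd, wrap them in the NVMe DH-HMAC-CHAP secret layout, and drop the result into a 0600 temp file. A self-contained sketch, assuming the standard DHHC-1 layout (two hex digits of hash ID, then base64 of the key bytes followed by their little-endian CRC-32); the helper name is illustrative.

  gen_key_sketch() {
    local len=$1 digest=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of entropy
    file=$(mktemp -t spdk.key.XXX)
    python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest" > "$file"
    chmod 0600 "$file"
    echo "$file"
  }
  gen_key_sketch 48 0   # 48 hex chars (24 bytes), null digest, like keys[0] above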
00:14:02.575 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:02.575 04:01:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3Iw
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.3Iw
00:14:02.834 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.3Iw
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.iu8 ]]
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iu8
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iu8
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iu8
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iBz
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.092 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.iBz
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.iBz
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.i7Q ]]
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.i7Q
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.i7Q
00:14:03.351 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.i7Q
00:14:03.610 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:14:03.610 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.M5c
00:14:03.610 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.610 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.610 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.610 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.M5c
00:14:03.610 04:01:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.M5c
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.cP0 ]]
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cP0
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cP0
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cP0
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
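Editor's note: each generated key file is registered twice under the same keyring name — once over the target's default RPC socket (/var/tmp/spdk.sock) and once over the host-side server at /var/tmp/host.sock — because both ends of the DH-HMAC-CHAP exchange must resolve key0/ckey0 etc. to the same secret (the earlier chmod 0600 is presumably what makes the files acceptable to the file-based keyring). Condensed, the pattern repeated above is:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# target side first (default socket /var/tmp/spdk.sock), then the host daemon
"$rpc" keyring_file_add_key key0 /tmp/spdk.key-null.3Iw
"$rpc" -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.3Iw
"$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iu8
"$rpc" -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iu8
# ...and likewise for key1/ckey1, key2/ckey2, and key3 (which has no ckey)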
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1JF
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1JF
00:14:03.869 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1JF
00:14:04.127 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:14:04.127 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:14:04.127 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:04.127 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:04.127 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:04.127 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:04.386 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:04.644
00:14:04.644 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:04.644 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:04.644 04:01:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:04.903 {
00:14:04.903 "cntlid": 1,
00:14:04.903 "qid": 0,
00:14:04.903 "state": "enabled",
00:14:04.903 "thread": "nvmf_tgt_poll_group_000",
00:14:04.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:04.903 "listen_address": {
00:14:04.903 "trtype": "RDMA",
00:14:04.903 "adrfam": "IPv4",
00:14:04.903 "traddr": "192.168.100.8",
00:14:04.903 "trsvcid": "4420"
00:14:04.903 },
00:14:04.903 "peer_address": {
00:14:04.903 "trtype": "RDMA",
00:14:04.903 "adrfam": "IPv4",
00:14:04.903 "traddr": "192.168.100.8",
00:14:04.903 "trsvcid": "43056"
00:14:04.903 },
00:14:04.903 "auth": {
00:14:04.903 "state": "completed",
00:14:04.903 "digest": "sha256",
00:14:04.903 "dhgroup": "null"
00:14:04.903 }
00:14:04.903 }
00:14:04.903 ]'
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:04.903 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
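Editor's note: the qpairs='[...]' capture above is the actual pass/fail check — the test reads the subsystem's queue pairs back from the target and asserts that the negotiated digest, DH group, and final auth state match what the host was restricted to. The same verification, condensed (subsystem NQN and expected values as in this iteration):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]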
00:14:05.162 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=:
00:14:05.162 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=:
00:14:05.728 04:01:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:05.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:05.728 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:05.728 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.728 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.728 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.728 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:05.728 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:05.728 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
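Editor's note: from here the trace repeats one fixed cycle per key — restrict the host's allowed digests/DH groups, allow the host NQN on the subsystem with the key pair, attach a controller through the SPDK host daemon, verify the qpair, then tear down and repeat the attach once more through the kernel initiator with nvme connect. A sketch of one iteration, using the names from this run (the verification step is the jq block shown earlier; the kernel nvme connect leg with the raw DHHC-1 secrets is shown further down):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# ...verify the qpair's auth object here, then tear down...
hostrpc bdev_nvme_detach_controller nvme0
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"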
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:05.986 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:06.245
00:14:06.245 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:06.245 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:06.245 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:06.503 {
00:14:06.503 "cntlid": 3,
00:14:06.503 "qid": 0,
00:14:06.503 "state": "enabled",
00:14:06.503 "thread": "nvmf_tgt_poll_group_000",
00:14:06.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:06.503 "listen_address": {
00:14:06.503 "trtype": "RDMA",
00:14:06.503 "adrfam": "IPv4",
00:14:06.503 "traddr": "192.168.100.8",
00:14:06.503 "trsvcid": "4420"
00:14:06.503 },
00:14:06.503 "peer_address": {
00:14:06.503 "trtype": "RDMA",
00:14:06.503 "adrfam": "IPv4",
00:14:06.503 "traddr": "192.168.100.8",
00:14:06.503 "trsvcid": "59453"
00:14:06.503 },
00:14:06.503 "auth": {
00:14:06.503 "state": "completed",
00:14:06.503 "digest": "sha256",
00:14:06.503 "dhgroup": "null"
00:14:06.503 }
00:14:06.503 }
00:14:06.503 ]'
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:06.503 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:06.762 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==:
00:14:06.762 04:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==:
00:14:07.328 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:07.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:07.328 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:07.328 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.328 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.328 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.328 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:07.328 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:07.328 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.586 04:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.845
00:14:07.845 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:07.845 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:07.845 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:08.103 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:08.103 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:08.104 {
00:14:08.104 "cntlid": 5,
00:14:08.104 "qid": 0,
00:14:08.104 "state": "enabled",
00:14:08.104 "thread": "nvmf_tgt_poll_group_000",
00:14:08.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:08.104 "listen_address": {
00:14:08.104 "trtype": "RDMA",
00:14:08.104 "adrfam": "IPv4",
00:14:08.104 "traddr": "192.168.100.8",
00:14:08.104 "trsvcid": "4420"
00:14:08.104 },
00:14:08.104 "peer_address": {
00:14:08.104 "trtype": "RDMA",
00:14:08.104 "adrfam": "IPv4",
00:14:08.104 "traddr": "192.168.100.8",
00:14:08.104 "trsvcid": "41481"
00:14:08.104 },
00:14:08.104 "auth": {
00:14:08.104 "state": "completed",
00:14:08.104 "digest": "sha256",
00:14:08.104 "dhgroup": "null"
00:14:08.104 }
00:14:08.104 }
00:14:08.104 ]'
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:08.104 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:08.362 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX:
00:14:08.362 04:02:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX:
00:14:08.928 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:09.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:09.187 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:09.446
00:14:09.446 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:09.446 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:09.446 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:09.704 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:09.704 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:09.704 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.704 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.704 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.704 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:09.704 {
00:14:09.704 "cntlid": 7,
00:14:09.704 "qid": 0,
00:14:09.704 "state": "enabled",
00:14:09.704 "thread": "nvmf_tgt_poll_group_000",
00:14:09.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:09.704 "listen_address": {
00:14:09.704 "trtype": "RDMA",
00:14:09.704 "adrfam": "IPv4",
00:14:09.704 "traddr": "192.168.100.8",
00:14:09.704 "trsvcid": "4420"
00:14:09.704 },
00:14:09.704 "peer_address": {
00:14:09.704 "trtype": "RDMA",
00:14:09.704 "adrfam": "IPv4",
00:14:09.704 "traddr": "192.168.100.8",
00:14:09.704 "trsvcid": "56323"
00:14:09.704 },
00:14:09.704 "auth": {
00:14:09.704 "state": "completed",
00:14:09.704 "digest": "sha256",
00:14:09.704 "dhgroup": "null"
00:14:09.704 }
00:14:09.704 }
00:14:09.704 ]'
00:14:09.704 04:02:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:09.704 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:09.704 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:09.704 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:09.704 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:09.963 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:09.963 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:09.963 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:09.963 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=:
00:14:09.963 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=:
00:14:10.530 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:10.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:10.788 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:10.788 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.788 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.788 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.788 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:10.788 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:10.788 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:10.788 04:02:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
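Editor's note: at this point the `for dhgroup` loop advances — the four keys have been exercised with dhgroup "null" (DH-HMAC-CHAP without an FFDHE exchange), and the same four-key pass now restarts with ffdhe2048. Restricting bdev_nvme_set_options before each pass forces the host to negotiate exactly that combination. The overall shape of the walk is sketched below; the precise digest and dhgroup lists come from target/auth.sh and are assumptions here beyond the sha256/null/ffdhe2048/ffdhe3072 values visible in this excerpt (hostrpc and connect_authenticate as in the earlier sketch and trace):

for digest in sha256 sha384 sha512; do
    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in 0 1 2 3; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # the helper traced above
        done
    done
done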
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:10.788 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:11.046
00:14:11.046 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:11.046 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:11.046 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:11.304 {
00:14:11.304 "cntlid": 9,
00:14:11.304 "qid": 0,
00:14:11.304 "state": "enabled",
00:14:11.304 "thread": "nvmf_tgt_poll_group_000",
00:14:11.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:11.304 "listen_address": {
00:14:11.304 "trtype": "RDMA",
00:14:11.304 "adrfam": "IPv4",
00:14:11.304 "traddr": "192.168.100.8",
00:14:11.304 "trsvcid": "4420"
00:14:11.304 },
00:14:11.304 "peer_address": {
00:14:11.304 "trtype": "RDMA",
00:14:11.304 "adrfam": "IPv4",
00:14:11.304 "traddr": "192.168.100.8",
00:14:11.304 "trsvcid": "58433"
00:14:11.304 },
00:14:11.304 "auth": {
00:14:11.304 "state": "completed",
00:14:11.304 "digest": "sha256",
00:14:11.304 "dhgroup": "ffdhe2048"
00:14:11.304 }
00:14:11.304 }
00:14:11.304 ]'
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:11.304 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:11.563 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:11.563 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:11.563 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:11.563 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:11.563 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=:
00:14:11.563 04:02:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=:
00:14:12.129 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:12.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:12.388 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:12.388 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.388 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.388 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.388 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:12.388 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:12.388 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.646 04:02:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:12.646
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:12.905 {
00:14:12.905 "cntlid": 11,
00:14:12.905 "qid": 0,
00:14:12.905 "state": "enabled",
00:14:12.905 "thread": "nvmf_tgt_poll_group_000",
00:14:12.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:12.905 "listen_address": {
00:14:12.905 "trtype": "RDMA",
00:14:12.905 "adrfam": "IPv4",
00:14:12.905 "traddr": "192.168.100.8",
00:14:12.905 "trsvcid": "4420"
00:14:12.905 },
00:14:12.905 "peer_address": {
00:14:12.905 "trtype": "RDMA",
00:14:12.905 "adrfam": "IPv4",
00:14:12.905 "traddr": "192.168.100.8",
00:14:12.905 "trsvcid": "43579"
00:14:12.905 },
00:14:12.905 "auth": {
00:14:12.905 "state": "completed",
00:14:12.905 "digest": "sha256",
00:14:12.905 "dhgroup": "ffdhe2048"
00:14:12.905 }
00:14:12.905 }
00:14:12.905 ]'
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:12.905 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:13.164 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:13.164 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:13.164 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:13.164 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:13.164 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:13.164 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==:
00:14:13.164 04:02:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==:
00:14:13.731 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:13.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:13.989 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:13.990 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:13.990 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:13.990 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:13.990 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.990 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.248 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.248 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:14.248 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:14.248 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:14.248
00:14:14.248 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:14.248 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:14.248 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:14.507 {
00:14:14.507 "cntlid": 13,
00:14:14.507 "qid": 0,
00:14:14.507 "state": "enabled",
00:14:14.507 "thread": "nvmf_tgt_poll_group_000",
00:14:14.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:14.507 "listen_address": {
00:14:14.507 "trtype": "RDMA",
00:14:14.507 "adrfam": "IPv4",
00:14:14.507 "traddr": "192.168.100.8",
00:14:14.507 "trsvcid": "4420"
00:14:14.507 },
00:14:14.507 "peer_address": {
00:14:14.507 "trtype": "RDMA",
00:14:14.507 "adrfam": "IPv4",
00:14:14.507 "traddr": "192.168.100.8",
00:14:14.507 "trsvcid": "35749"
00:14:14.507 },
00:14:14.507 "auth": {
00:14:14.507 "state": "completed",
00:14:14.507 "digest": "sha256",
00:14:14.507 "dhgroup": "ffdhe2048"
00:14:14.507 }
00:14:14.507 }
00:14:14.507 ]'
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:14.507 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:14.765 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:14.765 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:14.765 04:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:14.765 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX:
00:14:14.765 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX:
00:14:15.332 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:15.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:15.590 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:15.590 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.590 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.590 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.590 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:15.590 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:14:15.590 04:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
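Editor's note: in each kernel-initiator leg above, the two nvme-cli flags mirror the two keyring entries — --dhchap-secret carries the host key (what the target's key<i> must match) and --dhchap-ctrl-secret carries the controller key, making the authentication bidirectional; key3, whose ckeys[3] is empty, is exercised without the controller secret. A sketch of that leg, assuming (as the generation step at the top suggests) that each /tmp/spdk.key-* file holds the full DHHC-1:xx:...: string:

subnqn=nqn.2024-03.io.spdk:cnode0
hostid=00bafac1-9c9c-e711-906e-0017a4403562
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
    --dhchap-secret "$(cat /tmp/spdk.key-sha384.M5c)" \
    --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha256.cP0)"
nvme disconnect -n "$subnqn"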
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:15.849 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:16.107
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:16.107 {
00:14:16.107 "cntlid": 15,
00:14:16.107 "qid": 0,
00:14:16.107 "state": "enabled",
00:14:16.107 "thread": "nvmf_tgt_poll_group_000",
00:14:16.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:16.107 "listen_address": {
00:14:16.107 "trtype": "RDMA",
00:14:16.107 "adrfam": "IPv4",
00:14:16.107 "traddr": "192.168.100.8",
00:14:16.107 "trsvcid": "4420"
00:14:16.107 },
00:14:16.107 "peer_address": {
00:14:16.107 "trtype": "RDMA",
00:14:16.107 "adrfam": "IPv4",
00:14:16.107 "traddr": "192.168.100.8",
00:14:16.107 "trsvcid": "40988"
00:14:16.107 },
00:14:16.107 "auth": {
00:14:16.107 "state": "completed",
00:14:16.107 "digest": "sha256",
00:14:16.107 "dhgroup": "ffdhe2048"
00:14:16.107 }
00:14:16.107 }
00:14:16.107 ]'
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:16.107 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:16.366 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:16.366 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:16.366 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:16.366 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:16.366 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:16.625 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=:
00:14:16.625 04:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=:
00:14:17.192 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:17.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:17.192 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:17.192 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.192 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.192 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.192 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:17.192 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:17.192 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:17.192 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:17.452 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:17.710
00:14:17.711 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:17.711 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:17.711 04:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:17.711 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:17.711 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:17.711 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.711 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.711 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.711 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:17.711 {
00:14:17.711 "cntlid": 17,
00:14:17.711 "qid": 0,
00:14:17.711 "state": "enabled",
00:14:17.711 "thread": "nvmf_tgt_poll_group_000",
00:14:17.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:17.711 "listen_address": {
00:14:17.711 "trtype": "RDMA",
00:14:17.711 "adrfam": "IPv4",
00:14:17.711 "traddr": "192.168.100.8",
00:14:17.711 "trsvcid": "4420"
00:14:17.711 },
00:14:17.711 "peer_address": {
00:14:17.711 "trtype": "RDMA",
00:14:17.711 "adrfam": "IPv4",
00:14:17.711 "traddr": "192.168.100.8",
00:14:17.711 "trsvcid": "53620"
00:14:17.711 },
00:14:17.711 "auth": {
00:14:17.711 "state": "completed",
00:14:17.711 "digest": "sha256",
00:14:17.711 "dhgroup": "ffdhe3072"
00:14:17.711 }
00:14:17.711 }
00:14:17.711 ]'
00:14:18.004 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:18.004 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:18.004 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:18.004 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:18.004 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:18.004 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:18.004 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:18.004 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:18.004 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=:
00:14:18.005 04:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=:
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:18.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
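
The qpair dumps above are what the verification step runs against: target/auth.sh@75-77 pull .auth.digest, .auth.dhgroup and .auth.state out of the nvmf_subsystem_get_qpairs output and compare them with the values requested for the pass. Roughly, with $qpairs holding the JSON shown in the trace:

    # assert the negotiated auth parameters on the first qpair (sketch)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
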
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.937 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:19.195
00:14:19.195 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:19.195 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:19.195 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:19.452 {
00:14:19.452 "cntlid": 19,
00:14:19.452 "qid": 0,
00:14:19.452 "state": "enabled",
00:14:19.452 "thread": "nvmf_tgt_poll_group_000",
00:14:19.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:19.452 "listen_address": {
00:14:19.452 "trtype": "RDMA",
00:14:19.452 "adrfam": "IPv4",
00:14:19.452 "traddr": "192.168.100.8",
00:14:19.452 "trsvcid": "4420"
00:14:19.452 },
00:14:19.452 "peer_address": {
00:14:19.452 "trtype": "RDMA",
00:14:19.452 "adrfam": "IPv4",
00:14:19.452 "traddr": "192.168.100.8",
00:14:19.452 "trsvcid": "42584"
00:14:19.452 },
00:14:19.452 "auth": {
00:14:19.452 "state": "completed",
00:14:19.452 "digest": "sha256",
00:14:19.452 "dhgroup": "ffdhe3072"
00:14:19.452 }
00:14:19.452 }
00:14:19.452 ]'
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:19.452 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:19.710 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:19.710 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:19.710 04:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:19.710 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==:
00:14:19.710 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==:
00:14:20.274 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:20.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:20.531 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
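
Note how the key id drives the target-side registration: keys with a matching controller key are registered with both --dhchap-key and --dhchap-ctrlr-key (key0/ckey0, key1/ckey1, key2/ckey2), while key3 has no controller key, so the ${ckeys[$3]:+...} expansion in auth.sh@68 stays empty and add_host gets only --dhchap-key key3. Sketched with the same RPCs; key names are the ones in the trace and $hostnqn is a placeholder:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # bidirectional authentication: host key plus controller (reply) key
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # unidirectional authentication: host key only, as for key3 in this trace
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
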
00:14:20.531 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.531 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.531 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.531 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:20.531 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:20.531 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:20.789 04:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:21.046
00:14:21.046 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:21.046 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:21.046 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:21.046 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:21.046 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:21.046 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.046 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:21.303 {
00:14:21.303 "cntlid": 21,
00:14:21.303 "qid": 0,
00:14:21.303 "state": "enabled",
00:14:21.303 "thread": "nvmf_tgt_poll_group_000",
00:14:21.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:21.303 "listen_address": {
00:14:21.303 "trtype": "RDMA",
00:14:21.303 "adrfam": "IPv4",
00:14:21.303 "traddr": "192.168.100.8",
00:14:21.303 "trsvcid": "4420"
00:14:21.303 },
00:14:21.303 "peer_address": {
00:14:21.303 "trtype": "RDMA",
00:14:21.303 "adrfam": "IPv4",
00:14:21.303 "traddr": "192.168.100.8",
00:14:21.303 "trsvcid": "47971"
00:14:21.303 },
00:14:21.303 "auth": {
00:14:21.303 "state": "completed",
00:14:21.303 "digest": "sha256",
00:14:21.303 "dhgroup": "ffdhe3072"
00:14:21.303 }
00:14:21.303 }
00:14:21.303 ]'
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:21.303 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:21.560 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX:
00:14:21.560 04:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX:
00:14:22.123 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:22.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:22.123 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:22.123 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:22.123 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.123 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:22.123 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:22.123 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:22.123 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:22.381 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:22.639
00:14:22.639 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:22.639 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:22.639 04:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:22.896 {
00:14:22.896 "cntlid": 23,
00:14:22.896 "qid": 0,
00:14:22.896 "state": "enabled",
00:14:22.896 "thread": "nvmf_tgt_poll_group_000",
00:14:22.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:22.896 "listen_address": {
00:14:22.896 "trtype": "RDMA",
00:14:22.896 "adrfam": "IPv4",
00:14:22.896 "traddr": "192.168.100.8",
00:14:22.896 "trsvcid": "4420"
00:14:22.896 },
00:14:22.896 "peer_address": {
00:14:22.896 "trtype": "RDMA",
00:14:22.896 "adrfam": "IPv4",
00:14:22.896 "traddr": "192.168.100.8",
00:14:22.896 "trsvcid": "57545"
00:14:22.896 },
00:14:22.896 "auth": {
00:14:22.896 "state": "completed",
00:14:22.896 "digest": "sha256",
00:14:22.896 "dhgroup": "ffdhe3072"
00:14:22.896 }
00:14:22.896 }
00:14:22.896 ]'
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:22.896 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:23.153 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:23.153 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=:
00:14:23.153 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=:
00:14:23.718 04:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:23.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
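
The remove_host that follows closes out the ffdhe3072 group; auth.sh@119-121 then advance the outer loop to ffdhe4096 and restart the key sweep. The iteration implied by those trace lines looks roughly like this; the group list is inferred from the groups actually exercised in this log (ffdhe2048 through ffdhe6144 are visible here), and hostrpc/connect_authenticate are the script's own helpers:

    # sweep every dhgroup x key combination (sketch of the loop shape in target/auth.sh)
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in "${!keys[@]}"; do
            # restrict the host to a single group, then run one authenticated connect
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
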
00:14:23.718 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:23.718 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.718 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:23.718 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.718 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:23.718 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:23.718 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:23.718 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:23.975 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:14:23.975 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:23.975 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:23.975 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:23.975 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:23.975 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:23.975 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:23.976 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.976 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:23.976 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.976 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:23.976 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:23.976 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:24.232
00:14:24.232 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:24.232 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:24.232 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:24.489 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:24.490 {
00:14:24.490 "cntlid": 25,
00:14:24.490 "qid": 0,
00:14:24.490 "state": "enabled",
00:14:24.490 "thread": "nvmf_tgt_poll_group_000",
00:14:24.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:24.490 "listen_address": {
00:14:24.490 "trtype": "RDMA",
00:14:24.490 "adrfam": "IPv4",
00:14:24.490 "traddr": "192.168.100.8",
00:14:24.490 "trsvcid": "4420"
00:14:24.490 },
00:14:24.490 "peer_address": {
00:14:24.490 "trtype": "RDMA",
00:14:24.490 "adrfam": "IPv4",
00:14:24.490 "traddr": "192.168.100.8",
00:14:24.490 "trsvcid": "46740"
00:14:24.490 },
00:14:24.490 "auth": {
00:14:24.490 "state": "completed",
00:14:24.490 "digest": "sha256",
00:14:24.490 "dhgroup": "ffdhe4096"
00:14:24.490 }
00:14:24.490 }
00:14:24.490 ]'
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:24.490 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:24.747 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:24.747 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:24.747 04:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:24.747 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=:
00:14:24.747 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=:
00:14:25.311 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:25.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:25.568 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:25.568 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.568 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.568 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.568 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:25.568 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:25.568 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:25.825 04:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:26.082
00:14:26.082 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:26.082 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:26.082 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:26.082 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:26.082 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:26.082 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:26.082 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:26.082 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:26.082 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:26.082 {
00:14:26.082 "cntlid": 27,
00:14:26.082 "qid": 0,
00:14:26.082 "state": "enabled",
00:14:26.082 "thread": "nvmf_tgt_poll_group_000",
00:14:26.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:26.082 "listen_address": {
00:14:26.082 "trtype": "RDMA",
00:14:26.082 "adrfam": "IPv4",
00:14:26.082 "traddr": "192.168.100.8",
00:14:26.082 "trsvcid": "4420"
00:14:26.082 },
00:14:26.082 "peer_address": {
00:14:26.082 "trtype": "RDMA",
00:14:26.082 "adrfam": "IPv4",
00:14:26.082 "traddr": "192.168.100.8",
00:14:26.082 "trsvcid": "59968"
00:14:26.082 },
00:14:26.082 "auth": {
00:14:26.082 "state": "completed",
00:14:26.082 "digest": "sha256",
00:14:26.082 "dhgroup": "ffdhe4096"
00:14:26.082 }
00:14:26.082 }
00:14:26.082 ]'
00:14:26.340 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:26.340 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:26.340 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:26.340 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:26.340 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:26.340 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:26.340 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:26.340 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
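
After the RPC-level check, the same key pair is exercised end to end with nvme-cli, which takes the DH-HMAC-CHAP secrets directly on the command line; that is the nvme_connect/nvme disconnect pair that follows. Trimmed to its essentials (secrets abbreviated here, the full DHHC-1 strings appear verbatim in the trace; $hostnqn and $hostid are placeholders):

    # host-side connect using nvme-cli with DH-HMAC-CHAP secrets (sketch)
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
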
DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:26.597 04:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:27.159 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.159 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:27.159 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.159 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.159 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.159 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.160 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.160 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.417 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.674 00:14:27.674 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.674 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.674 04:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.932 { 00:14:27.932 "cntlid": 29, 00:14:27.932 "qid": 0, 00:14:27.932 "state": "enabled", 00:14:27.932 "thread": "nvmf_tgt_poll_group_000", 00:14:27.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:27.932 "listen_address": { 00:14:27.932 "trtype": "RDMA", 00:14:27.932 "adrfam": "IPv4", 00:14:27.932 "traddr": "192.168.100.8", 00:14:27.932 "trsvcid": "4420" 00:14:27.932 }, 00:14:27.932 "peer_address": { 00:14:27.932 "trtype": "RDMA", 00:14:27.932 "adrfam": "IPv4", 00:14:27.932 "traddr": "192.168.100.8", 00:14:27.932 "trsvcid": "40317" 00:14:27.932 }, 00:14:27.932 "auth": { 00:14:27.932 "state": "completed", 00:14:27.932 "digest": "sha256", 00:14:27.932 "dhgroup": "ffdhe4096" 00:14:27.932 } 00:14:27.932 } 00:14:27.932 ]' 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.932 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.189 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:28.189 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:28.753 04:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.753 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:28.753 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.753 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.753 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.753 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.753 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.753 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.010 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:29.011 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.011 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.268 00:14:29.268 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.268 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.268 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.525 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.525 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.525 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.525 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.525 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.525 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.525 { 00:14:29.525 "cntlid": 31, 00:14:29.525 "qid": 0, 00:14:29.525 "state": "enabled", 00:14:29.525 "thread": "nvmf_tgt_poll_group_000", 00:14:29.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:29.525 "listen_address": { 00:14:29.525 "trtype": "RDMA", 00:14:29.525 "adrfam": "IPv4", 00:14:29.525 "traddr": "192.168.100.8", 00:14:29.525 "trsvcid": "4420" 00:14:29.525 }, 00:14:29.525 "peer_address": { 00:14:29.525 "trtype": "RDMA", 00:14:29.525 "adrfam": "IPv4", 00:14:29.525 "traddr": "192.168.100.8", 00:14:29.525 "trsvcid": "43995" 00:14:29.525 }, 00:14:29.526 "auth": { 00:14:29.526 "state": "completed", 00:14:29.526 "digest": "sha256", 00:14:29.526 "dhgroup": "ffdhe4096" 00:14:29.526 } 00:14:29.526 } 00:14:29.526 ]' 00:14:29.526 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.526 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.526 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.526 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:29.526 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.526 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.526 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
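[The trace above completes one connect_authenticate pass for sha256/ffdhe4096 with key3 (a key that has no paired controller key in this run, so --dhchap-ctrlr-key drops out). Condensed into plain shell, the cycle looks roughly like the sketch below; this is a reconstruction from the trace, not the literal script. The rpc.py path, $SUBNQN, and $HOSTNQN are placeholders for the values printed in the log, and host_rpc/tgt_rpc are hypothetical wrappers for the two RPC clients visible above (the host-side /var/tmp/host.sock socket and the target's default socket).]

    # Minimal sketch of one connect_authenticate cycle, reconstructed from
    # the trace above; helper names and variables are placeholders.
    host_rpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }   # host (initiator) side
    tgt_rpc()  { scripts/rpc.py "$@"; }                         # target side, default socket

    # Pin the initiator to the digest/dhgroup combination under test.
    host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Register the host on the subsystem with the DH-HMAC-CHAP key under test.
    tgt_rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Attach a controller over RDMA; this is where authentication actually runs.
    host_rpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

    # Verify the controller came up and the qpair negotiated what was requested.
    [[ $(host_rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    tgt_rpc nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
    # expected: digest "sha256", dhgroup "ffdhe4096", state "completed"

    # Tear down before the next key/dhgroup combination.
    host_rpc bdev_nvme_detach_controller nvme0

[The qpairs JSON checked by the jq filters is the same structure dumped repeatedly in this log: the test only asserts on .auth.digest, .auth.dhgroup, and .auth.state, treating the listen/peer addresses as informational.]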
00:14:29.526 04:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.783 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:29.783 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:30.347 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.605 04:02:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.605 04:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.169 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.169 { 00:14:31.169 "cntlid": 33, 00:14:31.169 "qid": 0, 00:14:31.169 "state": "enabled", 00:14:31.169 "thread": "nvmf_tgt_poll_group_000", 00:14:31.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:31.169 "listen_address": { 00:14:31.169 "trtype": "RDMA", 00:14:31.169 "adrfam": "IPv4", 00:14:31.169 "traddr": "192.168.100.8", 00:14:31.169 "trsvcid": "4420" 00:14:31.169 }, 00:14:31.169 "peer_address": { 00:14:31.169 "trtype": "RDMA", 00:14:31.169 "adrfam": "IPv4", 00:14:31.169 "traddr": "192.168.100.8", 00:14:31.169 "trsvcid": "44951" 00:14:31.169 }, 00:14:31.169 "auth": { 00:14:31.169 "state": "completed", 00:14:31.169 "digest": "sha256", 00:14:31.169 "dhgroup": "ffdhe6144" 00:14:31.169 } 00:14:31.169 } 00:14:31.169 ]' 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:31.169 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.426 
04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.426 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.426 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.426 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:31.426 04:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:32.067 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.360 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.361 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.361 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.361 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.361 04:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.618 00:14:32.875 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.875 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.875 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.875 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.875 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.875 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.875 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.875 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.876 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.876 { 00:14:32.876 "cntlid": 35, 00:14:32.876 "qid": 0, 00:14:32.876 "state": "enabled", 00:14:32.876 "thread": "nvmf_tgt_poll_group_000", 00:14:32.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:32.876 "listen_address": { 00:14:32.876 "trtype": "RDMA", 00:14:32.876 "adrfam": "IPv4", 00:14:32.876 "traddr": "192.168.100.8", 00:14:32.876 "trsvcid": "4420" 00:14:32.876 }, 00:14:32.876 "peer_address": { 00:14:32.876 "trtype": "RDMA", 00:14:32.876 "adrfam": "IPv4", 00:14:32.876 "traddr": "192.168.100.8", 00:14:32.876 "trsvcid": "49097" 00:14:32.876 }, 00:14:32.876 "auth": { 00:14:32.876 "state": "completed", 00:14:32.876 "digest": "sha256", 00:14:32.876 "dhgroup": "ffdhe6144" 00:14:32.876 } 00:14:32.876 } 00:14:32.876 ]' 00:14:32.876 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.876 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.876 
04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.133 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:33.133 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.133 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.133 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.133 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.133 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:33.133 04:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:33.697 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.954 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:33.954 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.954 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.954 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.954 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.954 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.954 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:34.212 04:02:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.212 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.469 00:14:34.469 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.469 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.469 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.727 { 00:14:34.727 "cntlid": 37, 00:14:34.727 "qid": 0, 00:14:34.727 "state": "enabled", 00:14:34.727 "thread": "nvmf_tgt_poll_group_000", 00:14:34.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:34.727 "listen_address": { 00:14:34.727 "trtype": "RDMA", 00:14:34.727 "adrfam": "IPv4", 00:14:34.727 "traddr": "192.168.100.8", 00:14:34.727 "trsvcid": "4420" 00:14:34.727 }, 00:14:34.727 "peer_address": { 00:14:34.727 "trtype": "RDMA", 00:14:34.727 "adrfam": "IPv4", 00:14:34.727 "traddr": "192.168.100.8", 00:14:34.727 "trsvcid": "47607" 00:14:34.727 }, 00:14:34.727 "auth": { 00:14:34.727 "state": "completed", 00:14:34.727 "digest": "sha256", 00:14:34.727 "dhgroup": "ffdhe6144" 00:14:34.727 } 00:14:34.727 } 
00:14:34.727 ]' 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.727 04:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.727 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.984 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:34.984 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:35.547 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.547 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:35.547 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.547 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.547 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.547 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.547 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.547 04:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.805 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.061 00:14:36.061 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.061 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.061 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.318 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.319 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.319 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.319 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.319 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.319 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.319 { 00:14:36.319 "cntlid": 39, 00:14:36.319 "qid": 0, 00:14:36.319 "state": "enabled", 00:14:36.319 "thread": "nvmf_tgt_poll_group_000", 00:14:36.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:36.319 "listen_address": { 00:14:36.319 "trtype": "RDMA", 00:14:36.319 "adrfam": "IPv4", 00:14:36.319 "traddr": "192.168.100.8", 00:14:36.319 "trsvcid": "4420" 00:14:36.319 }, 00:14:36.319 "peer_address": { 00:14:36.319 "trtype": "RDMA", 00:14:36.319 "adrfam": "IPv4", 00:14:36.319 "traddr": "192.168.100.8", 00:14:36.319 "trsvcid": "60043" 00:14:36.319 }, 
00:14:36.319 "auth": { 00:14:36.319 "state": "completed", 00:14:36.319 "digest": "sha256", 00:14:36.319 "dhgroup": "ffdhe6144" 00:14:36.319 } 00:14:36.319 } 00:14:36.319 ]' 00:14:36.319 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.319 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.319 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.576 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:36.576 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.576 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.576 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.576 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.576 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:36.576 04:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:37.138 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.395 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:37.395 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.395 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.395 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.395 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.395 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.395 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.395 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.652 04:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.909 00:14:37.909 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.909 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.909 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.166 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.166 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.166 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.166 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.166 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.166 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.166 { 00:14:38.166 "cntlid": 41, 00:14:38.166 "qid": 0, 00:14:38.166 "state": "enabled", 00:14:38.166 "thread": "nvmf_tgt_poll_group_000", 00:14:38.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:38.166 "listen_address": { 00:14:38.166 "trtype": "RDMA", 00:14:38.166 "adrfam": "IPv4", 00:14:38.166 "traddr": 
"192.168.100.8", 00:14:38.166 "trsvcid": "4420" 00:14:38.166 }, 00:14:38.166 "peer_address": { 00:14:38.166 "trtype": "RDMA", 00:14:38.166 "adrfam": "IPv4", 00:14:38.166 "traddr": "192.168.100.8", 00:14:38.166 "trsvcid": "34602" 00:14:38.166 }, 00:14:38.166 "auth": { 00:14:38.166 "state": "completed", 00:14:38.166 "digest": "sha256", 00:14:38.166 "dhgroup": "ffdhe8192" 00:14:38.166 } 00:14:38.166 } 00:14:38.166 ]' 00:14:38.166 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.167 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.167 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.167 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:38.167 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.424 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.424 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.424 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.424 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:38.424 04:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:38.987 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.243 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:39.243 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.243 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.243 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.243 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.243 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.243 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.501 04:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.758 00:14:39.758 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.758 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.758 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.015 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:14:40.016 { 00:14:40.016 "cntlid": 43, 00:14:40.016 "qid": 0, 00:14:40.016 "state": "enabled", 00:14:40.016 "thread": "nvmf_tgt_poll_group_000", 00:14:40.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:40.016 "listen_address": { 00:14:40.016 "trtype": "RDMA", 00:14:40.016 "adrfam": "IPv4", 00:14:40.016 "traddr": "192.168.100.8", 00:14:40.016 "trsvcid": "4420" 00:14:40.016 }, 00:14:40.016 "peer_address": { 00:14:40.016 "trtype": "RDMA", 00:14:40.016 "adrfam": "IPv4", 00:14:40.016 "traddr": "192.168.100.8", 00:14:40.016 "trsvcid": "56186" 00:14:40.016 }, 00:14:40.016 "auth": { 00:14:40.016 "state": "completed", 00:14:40.016 "digest": "sha256", 00:14:40.016 "dhgroup": "ffdhe8192" 00:14:40.016 } 00:14:40.016 } 00:14:40.016 ]' 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:40.016 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.274 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.274 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.274 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.274 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:40.274 04:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:40.841 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.101 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:41.101 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.101 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.101 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.101 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.101 04:02:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.101 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.359 04:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.617 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.875 { 00:14:41.875 "cntlid": 45, 00:14:41.875 "qid": 0, 00:14:41.875 "state": "enabled", 00:14:41.875 "thread": "nvmf_tgt_poll_group_000", 00:14:41.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:41.875 "listen_address": { 00:14:41.875 "trtype": "RDMA", 00:14:41.875 "adrfam": "IPv4", 00:14:41.875 "traddr": "192.168.100.8", 00:14:41.875 "trsvcid": "4420" 00:14:41.875 }, 00:14:41.875 "peer_address": { 00:14:41.875 "trtype": "RDMA", 00:14:41.875 "adrfam": "IPv4", 00:14:41.875 "traddr": "192.168.100.8", 00:14:41.875 "trsvcid": "41629" 00:14:41.875 }, 00:14:41.875 "auth": { 00:14:41.875 "state": "completed", 00:14:41.875 "digest": "sha256", 00:14:41.875 "dhgroup": "ffdhe8192" 00:14:41.875 } 00:14:41.875 } 00:14:41.875 ]' 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.875 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.133 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:42.133 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.133 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.133 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.133 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.133 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:42.133 04:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:42.699 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.958 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:42.958 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.958 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
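[Each RPC-level pass in this stretch of the log is followed by the same kernel-initiator check: nvme-cli connects with the raw DHHC-1 secrets, disconnects, and the host entry is removed before the next dhgroup/key pairing. The loop driving the ffdhe4096 → ffdhe6144 → ffdhe8192 sequence seen above reduces to roughly the following sketch; the keys/ckeys arrays, $SUBNQN, $HOSTNQN, and $HOSTID stand in for the DHHC-1 material and UUIDs printed in the log, and host_rpc/tgt_rpc/connect_authenticate are the helpers sketched after the ffdhe4096 pass earlier.]

    # Rough shape of the iteration visible in this section of the log;
    # arrays and NQN/host-ID variables are placeholders, not literal values.
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
        host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"   # RPC-path pass (see sketch above)

        # Kernel-initiator pass with the raw secrets; the controller secret
        # is optional (key3 in this run has no ckey, so the flag drops out).
        nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 \
            -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
            --dhchap-secret "${keys[$keyid]}" \
            ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
        nvme disconnect -n "$SUBNQN"

        # Drop the host registration before the next combination.
        tgt_rpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
      done
    done

[Exercising both the SPDK bdev_nvme initiator (via RPC) and the kernel nvme-cli initiator against the same target covers both ends of the DH-HMAC-CHAP handshake for every digest/dhgroup/key combination.]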
00:14:42.958 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.958 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.958 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:42.958 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.216 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.474 00:14:43.733 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.733 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.733 04:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.733 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.733 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.733 04:02:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.733 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.733 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.733 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.733 { 00:14:43.733 "cntlid": 47, 00:14:43.733 "qid": 0, 00:14:43.733 "state": "enabled", 00:14:43.733 "thread": "nvmf_tgt_poll_group_000", 00:14:43.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:43.733 "listen_address": { 00:14:43.733 "trtype": "RDMA", 00:14:43.733 "adrfam": "IPv4", 00:14:43.733 "traddr": "192.168.100.8", 00:14:43.733 "trsvcid": "4420" 00:14:43.733 }, 00:14:43.733 "peer_address": { 00:14:43.733 "trtype": "RDMA", 00:14:43.733 "adrfam": "IPv4", 00:14:43.733 "traddr": "192.168.100.8", 00:14:43.733 "trsvcid": "44949" 00:14:43.733 }, 00:14:43.733 "auth": { 00:14:43.733 "state": "completed", 00:14:43.733 "digest": "sha256", 00:14:43.733 "dhgroup": "ffdhe8192" 00:14:43.733 } 00:14:43.733 } 00:14:43.733 ]' 00:14:43.733 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.733 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.733 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.991 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:43.991 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.991 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.991 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.991 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.991 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:43.991 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:44.926 04:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.926 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.184 00:14:45.184 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.184 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.184 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.441 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.441 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.441 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.441 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.441 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.441 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.441 { 00:14:45.441 "cntlid": 49, 00:14:45.441 "qid": 0, 00:14:45.441 "state": "enabled", 00:14:45.441 "thread": "nvmf_tgt_poll_group_000", 00:14:45.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:45.441 "listen_address": { 00:14:45.441 "trtype": "RDMA", 00:14:45.441 "adrfam": "IPv4", 00:14:45.441 "traddr": "192.168.100.8", 00:14:45.441 "trsvcid": "4420" 00:14:45.441 }, 00:14:45.442 "peer_address": { 00:14:45.442 "trtype": "RDMA", 00:14:45.442 "adrfam": "IPv4", 00:14:45.442 "traddr": "192.168.100.8", 00:14:45.442 "trsvcid": "37217" 00:14:45.442 }, 00:14:45.442 "auth": { 00:14:45.442 "state": "completed", 00:14:45.442 "digest": "sha384", 00:14:45.442 "dhgroup": "null" 00:14:45.442 } 00:14:45.442 } 00:14:45.442 ]' 00:14:45.442 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.442 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:45.442 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.442 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:45.442 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.442 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.442 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.442 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.699 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:45.699 04:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:46.263 04:02:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.521 04:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.779 00:14:46.779 04:02:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.779 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.779 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.036 { 00:14:47.036 "cntlid": 51, 00:14:47.036 "qid": 0, 00:14:47.036 "state": "enabled", 00:14:47.036 "thread": "nvmf_tgt_poll_group_000", 00:14:47.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:47.036 "listen_address": { 00:14:47.036 "trtype": "RDMA", 00:14:47.036 "adrfam": "IPv4", 00:14:47.036 "traddr": "192.168.100.8", 00:14:47.036 "trsvcid": "4420" 00:14:47.036 }, 00:14:47.036 "peer_address": { 00:14:47.036 "trtype": "RDMA", 00:14:47.036 "adrfam": "IPv4", 00:14:47.036 "traddr": "192.168.100.8", 00:14:47.036 "trsvcid": "33726" 00:14:47.036 }, 00:14:47.036 "auth": { 00:14:47.036 "state": "completed", 00:14:47.036 "digest": "sha384", 00:14:47.036 "dhgroup": "null" 00:14:47.036 } 00:14:47.036 } 00:14:47.036 ]' 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:47.036 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.293 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.293 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.293 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.293 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:47.293 04:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 
00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:47.859 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.116 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:48.116 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.116 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.116 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.116 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.116 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.116 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.374 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.631 00:14:48.631 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.631 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.631 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.631 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.631 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.631 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.632 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.632 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.632 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.632 { 00:14:48.632 "cntlid": 53, 00:14:48.632 "qid": 0, 00:14:48.632 "state": "enabled", 00:14:48.632 "thread": "nvmf_tgt_poll_group_000", 00:14:48.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:48.632 "listen_address": { 00:14:48.632 "trtype": "RDMA", 00:14:48.632 "adrfam": "IPv4", 00:14:48.632 "traddr": "192.168.100.8", 00:14:48.632 "trsvcid": "4420" 00:14:48.632 }, 00:14:48.632 "peer_address": { 00:14:48.632 "trtype": "RDMA", 00:14:48.632 "adrfam": "IPv4", 00:14:48.632 "traddr": "192.168.100.8", 00:14:48.632 "trsvcid": "48720" 00:14:48.632 }, 00:14:48.632 "auth": { 00:14:48.632 "state": "completed", 00:14:48.632 "digest": "sha384", 00:14:48.632 "dhgroup": "null" 00:14:48.632 } 00:14:48.632 } 00:14:48.632 ]' 00:14:48.632 04:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.632 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.632 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.889 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:48.889 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.889 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.889 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.889 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.146 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:49.146 04:02:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:49.711 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.711 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:49.711 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.711 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.711 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.711 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.711 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:49.711 04:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:49.969 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.969 04:02:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.227 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.227 { 00:14:50.227 "cntlid": 55, 00:14:50.227 "qid": 0, 00:14:50.227 "state": "enabled", 00:14:50.227 "thread": "nvmf_tgt_poll_group_000", 00:14:50.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:50.227 "listen_address": { 00:14:50.227 "trtype": "RDMA", 00:14:50.227 "adrfam": "IPv4", 00:14:50.227 "traddr": "192.168.100.8", 00:14:50.227 "trsvcid": "4420" 00:14:50.227 }, 00:14:50.227 "peer_address": { 00:14:50.227 "trtype": "RDMA", 00:14:50.227 "adrfam": "IPv4", 00:14:50.227 "traddr": "192.168.100.8", 00:14:50.227 "trsvcid": "45324" 00:14:50.227 }, 00:14:50.227 "auth": { 00:14:50.227 "state": "completed", 00:14:50.227 "digest": "sha384", 00:14:50.227 "dhgroup": "null" 00:14:50.227 } 00:14:50.227 } 00:14:50.227 ]' 00:14:50.227 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.484 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.484 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.484 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:50.484 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.484 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.484 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.484 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.742 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:50.742 04:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:51.306 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.306 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:51.306 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.306 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.307 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.307 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.307 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.307 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:51.307 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.564 04:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.822 00:14:51.822 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.822 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.822 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.079 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.079 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.079 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.080 { 00:14:52.080 "cntlid": 57, 00:14:52.080 "qid": 0, 00:14:52.080 "state": "enabled", 00:14:52.080 "thread": "nvmf_tgt_poll_group_000", 00:14:52.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:52.080 "listen_address": { 00:14:52.080 "trtype": "RDMA", 00:14:52.080 "adrfam": "IPv4", 00:14:52.080 "traddr": "192.168.100.8", 00:14:52.080 "trsvcid": "4420" 00:14:52.080 }, 00:14:52.080 "peer_address": { 00:14:52.080 "trtype": "RDMA", 00:14:52.080 "adrfam": "IPv4", 00:14:52.080 "traddr": "192.168.100.8", 00:14:52.080 "trsvcid": "40729" 00:14:52.080 }, 00:14:52.080 "auth": { 00:14:52.080 "state": "completed", 00:14:52.080 "digest": "sha384", 00:14:52.080 "dhgroup": "ffdhe2048" 00:14:52.080 } 00:14:52.080 } 00:14:52.080 ]' 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.080 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.337 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:52.337 04:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:52.902 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.902 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:52.902 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.902 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.902 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.902 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.902 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:52.902 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.161 
04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.161 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.419 00:14:53.419 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.419 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.419 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.677 { 00:14:53.677 "cntlid": 59, 00:14:53.677 "qid": 0, 00:14:53.677 "state": "enabled", 00:14:53.677 "thread": "nvmf_tgt_poll_group_000", 00:14:53.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:53.677 "listen_address": { 00:14:53.677 "trtype": "RDMA", 00:14:53.677 "adrfam": "IPv4", 00:14:53.677 "traddr": "192.168.100.8", 00:14:53.677 "trsvcid": "4420" 00:14:53.677 }, 00:14:53.677 "peer_address": { 00:14:53.677 "trtype": "RDMA", 00:14:53.677 "adrfam": "IPv4", 00:14:53.677 "traddr": "192.168.100.8", 00:14:53.677 "trsvcid": "56899" 00:14:53.677 }, 00:14:53.677 "auth": { 00:14:53.677 "state": "completed", 00:14:53.677 "digest": "sha384", 00:14:53.677 "dhgroup": "ffdhe2048" 00:14:53.677 } 00:14:53.677 } 00:14:53.677 ]' 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:53.677 04:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
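Each connect_authenticate pass above verifies the negotiated parameters by pulling the qpair list from the target and probing .auth.digest, .auth.dhgroup, and .auth.state with three separate jq calls. A minimal sketch, assuming the target listens on SPDK's default RPC socket as in this run, collapses the probes into one jq -e expression whose exit status can gate the test directly:

    # jq -e exits non-zero when the expression is false/null, so this single
    # call replaces the three [[ ... ]] comparisons seen in the trace.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
      jq -e '.[0].auth | .digest == "sha384"
                     and .dhgroup == "ffdhe2048"
                     and .state == "completed"'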
00:14:53.677 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.677 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.677 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.936 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:53.936 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:14:54.502 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.502 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:54.502 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.502 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.761 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.761 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.761 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:54.761 04:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
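The key2 iteration just traced registers both a host key and a controller key, which is what makes the authentication bidirectional: --dhchap-key authenticates the host to the controller, and --dhchap-ctrlr-key makes the controller prove itself back. The key3 passes omit the controller key (ckeys[3] is empty), leaving those sessions unidirectional. A condensed sketch of the pairing, with $HOST_NQN standing in for the long uuid NQN used in this run and rpc.py abbreviating the full script path:

    # Target side: accept the host and bind key2/ckey2 to it.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$HOST_NQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: attach with the matching pair via the host app's socket.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q "$HOST_NQN" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2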
00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.761 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.020 00:14:55.020 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.020 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.020 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.278 { 00:14:55.278 "cntlid": 61, 00:14:55.278 "qid": 0, 00:14:55.278 "state": "enabled", 00:14:55.278 "thread": "nvmf_tgt_poll_group_000", 00:14:55.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:55.278 "listen_address": { 00:14:55.278 "trtype": "RDMA", 00:14:55.278 "adrfam": "IPv4", 00:14:55.278 "traddr": "192.168.100.8", 00:14:55.278 "trsvcid": "4420" 00:14:55.278 }, 00:14:55.278 "peer_address": { 00:14:55.278 "trtype": "RDMA", 00:14:55.278 "adrfam": "IPv4", 00:14:55.278 "traddr": "192.168.100.8", 00:14:55.278 "trsvcid": "51065" 00:14:55.278 }, 00:14:55.278 "auth": { 00:14:55.278 "state": "completed", 00:14:55.278 "digest": "sha384", 00:14:55.278 "dhgroup": "ffdhe2048" 00:14:55.278 } 00:14:55.278 } 00:14:55.278 ]' 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
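Two RPC endpoints alternate throughout this trace: rpc_cmd drives the nvmf target over SPDK's default socket, while the hostrpc wrapper points rpc.py at /var/tmp/host.sock, where a second SPDK application plays the NVMe-oF host. Keeping the roles straight explains why the nvmf_* calls and the bdev_nvme_* calls never share a socket. A stripped-down sketch of the split (rpc.py again abbreviates the full script path):

    # Target role: subsystem/qpair management on the default socket.
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    # Host role: option tuning and controller attach/detach on host.sock.
    # Pinning digests/dhgroups here forces negotiation onto the pair under test.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers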
00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.278 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.537 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:55.537 04:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:14:56.103 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.362 
04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.362 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.621 00:14:56.621 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.621 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.621 04:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.879 { 00:14:56.879 "cntlid": 63, 00:14:56.879 "qid": 0, 00:14:56.879 "state": "enabled", 00:14:56.879 "thread": "nvmf_tgt_poll_group_000", 00:14:56.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:56.879 "listen_address": { 00:14:56.879 "trtype": "RDMA", 00:14:56.879 "adrfam": "IPv4", 00:14:56.879 "traddr": "192.168.100.8", 00:14:56.879 "trsvcid": "4420" 00:14:56.879 }, 00:14:56.879 "peer_address": { 00:14:56.879 "trtype": "RDMA", 00:14:56.879 "adrfam": "IPv4", 00:14:56.879 "traddr": "192.168.100.8", 00:14:56.879 "trsvcid": "41195" 00:14:56.879 }, 00:14:56.879 "auth": { 00:14:56.879 "state": "completed", 00:14:56.879 "digest": "sha384", 00:14:56.879 "dhgroup": "ffdhe2048" 00:14:56.879 } 00:14:56.879 } 00:14:56.879 ]' 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.879 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.137 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.137 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.137 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.137 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:57.137 04:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:14:57.705 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.963 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:57.963 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.963 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.963 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.963 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.963 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.963 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:57.963 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 
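The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line traced here is ordinary bash parameter expansion: when the controller-key slot for the current key index is empty, the array expands to nothing and the optional flag pair simply drops out of the later rpc.py invocations. A minimal sketch of the pattern, with placeholder key material rather than this run's real DHHC-1 blobs:

  #!/usr/bin/env bash
  # Optional-flag pattern from target/auth.sh: the array gains the two words
  # "--dhchap-ctrlr-key ckeyN" only when ckeys[N] is non-empty.
  ckeys=('DHHC-1:00:placeholder-ctrl-key:' '')   # slot 1 deliberately empty
  for keyid in 0 1; do
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid args: --dhchap-key key$keyid ${ckey[*]}"
  done
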
00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.222 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.222 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.481 { 00:14:58.481 "cntlid": 65, 00:14:58.481 "qid": 0, 00:14:58.481 "state": "enabled", 00:14:58.481 "thread": "nvmf_tgt_poll_group_000", 00:14:58.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:58.481 "listen_address": { 00:14:58.481 "trtype": "RDMA", 00:14:58.481 "adrfam": "IPv4", 00:14:58.481 "traddr": "192.168.100.8", 00:14:58.481 "trsvcid": "4420" 00:14:58.481 }, 00:14:58.481 "peer_address": { 00:14:58.481 "trtype": "RDMA", 00:14:58.481 "adrfam": "IPv4", 00:14:58.481 "traddr": "192.168.100.8", 00:14:58.481 "trsvcid": "42863" 00:14:58.481 }, 00:14:58.481 "auth": { 00:14:58.481 "state": "completed", 00:14:58.481 "digest": "sha384", 00:14:58.481 "dhgroup": "ffdhe3072" 
00:14:58.481 } 00:14:58.481 } 00:14:58.481 ]' 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.481 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.739 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.739 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.739 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.739 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.739 04:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.739 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:58.739 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:14:59.673 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.673 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:59.673 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.673 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.673 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.673 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.673 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:59.673 04:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:59.673 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:59.673 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:14:59.673 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:59.673 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:59.673 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:59.673 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.674 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.674 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.674 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.674 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.674 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.674 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.674 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.932 00:14:59.932 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.932 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.932 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.193 { 00:15:00.193 "cntlid": 67, 00:15:00.193 "qid": 0, 00:15:00.193 "state": "enabled", 00:15:00.193 "thread": "nvmf_tgt_poll_group_000", 00:15:00.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:00.193 "listen_address": { 00:15:00.193 "trtype": "RDMA", 00:15:00.193 "adrfam": "IPv4", 00:15:00.193 "traddr": "192.168.100.8", 00:15:00.193 "trsvcid": 
"4420" 00:15:00.193 }, 00:15:00.193 "peer_address": { 00:15:00.193 "trtype": "RDMA", 00:15:00.193 "adrfam": "IPv4", 00:15:00.193 "traddr": "192.168.100.8", 00:15:00.193 "trsvcid": "46151" 00:15:00.193 }, 00:15:00.193 "auth": { 00:15:00.193 "state": "completed", 00:15:00.193 "digest": "sha384", 00:15:00.193 "dhgroup": "ffdhe3072" 00:15:00.193 } 00:15:00.193 } 00:15:00.193 ]' 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:00.193 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.451 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.451 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.451 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.451 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:00.451 04:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:01.017 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 
00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.275 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.533 00:15:01.533 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.533 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.533 04:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.791 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.791 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.791 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.792 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.792 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.792 { 00:15:01.792 "cntlid": 69, 00:15:01.792 "qid": 0, 00:15:01.792 "state": "enabled", 00:15:01.792 "thread": "nvmf_tgt_poll_group_000", 
00:15:01.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:01.792 "listen_address": { 00:15:01.792 "trtype": "RDMA", 00:15:01.792 "adrfam": "IPv4", 00:15:01.792 "traddr": "192.168.100.8", 00:15:01.792 "trsvcid": "4420" 00:15:01.792 }, 00:15:01.792 "peer_address": { 00:15:01.792 "trtype": "RDMA", 00:15:01.792 "adrfam": "IPv4", 00:15:01.792 "traddr": "192.168.100.8", 00:15:01.792 "trsvcid": "40857" 00:15:01.792 }, 00:15:01.792 "auth": { 00:15:01.792 "state": "completed", 00:15:01.792 "digest": "sha384", 00:15:01.792 "dhgroup": "ffdhe3072" 00:15:01.792 } 00:15:01.792 } 00:15:01.792 ]' 00:15:01.792 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.792 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.792 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.050 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.050 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.050 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.050 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.050 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.050 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:02.050 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:02.617 04:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.875 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:02.875 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.875 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.875 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.875 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.875 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 
00:15:02.875 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.134 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.393 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:03.393 { 00:15:03.393 "cntlid": 71, 00:15:03.393 "qid": 0, 00:15:03.393 "state": "enabled", 00:15:03.393 "thread": "nvmf_tgt_poll_group_000", 00:15:03.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:03.393 "listen_address": { 00:15:03.393 "trtype": "RDMA", 00:15:03.393 "adrfam": "IPv4", 00:15:03.393 "traddr": "192.168.100.8", 00:15:03.393 "trsvcid": "4420" 00:15:03.393 }, 00:15:03.393 "peer_address": { 00:15:03.393 "trtype": "RDMA", 00:15:03.393 "adrfam": "IPv4", 00:15:03.393 "traddr": "192.168.100.8", 00:15:03.393 "trsvcid": "55804" 00:15:03.393 }, 00:15:03.393 "auth": { 00:15:03.393 "state": "completed", 00:15:03.393 "digest": "sha384", 00:15:03.393 "dhgroup": "ffdhe3072" 00:15:03.393 } 00:15:03.393 } 00:15:03.393 ]' 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.393 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.651 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:03.651 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.651 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.651 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.651 04:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.909 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:03.909 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:04.476 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.476 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:04.476 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.476 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.476 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.476 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.476 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.476 04:02:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:04.476 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.735 04:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.993 00:15:04.993 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.993 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.993 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.251 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.251 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.251 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.252 { 00:15:05.252 "cntlid": 73, 00:15:05.252 "qid": 0, 00:15:05.252 "state": "enabled", 00:15:05.252 "thread": "nvmf_tgt_poll_group_000", 00:15:05.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:05.252 "listen_address": { 00:15:05.252 "trtype": "RDMA", 00:15:05.252 "adrfam": "IPv4", 00:15:05.252 "traddr": "192.168.100.8", 00:15:05.252 "trsvcid": "4420" 00:15:05.252 }, 00:15:05.252 "peer_address": { 00:15:05.252 "trtype": "RDMA", 00:15:05.252 "adrfam": "IPv4", 00:15:05.252 "traddr": "192.168.100.8", 00:15:05.252 "trsvcid": "37389" 00:15:05.252 }, 00:15:05.252 "auth": { 00:15:05.252 "state": "completed", 00:15:05.252 "digest": "sha384", 00:15:05.252 "dhgroup": "ffdhe4096" 00:15:05.252 } 00:15:05.252 } 00:15:05.252 ]' 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.252 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.510 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:05.510 04:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:06.076 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.076 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:06.076 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.076 04:03:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.076 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.076 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.076 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:06.076 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.334 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.593 00:15:06.593 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.593 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.593 04:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.851 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
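After the in-process attach/detach passes, each cycle repeats the handshake through the kernel initiator: nvme connect carries the DHHC-1 secrets directly, and the target-side host entry is removed once the disconnect is confirmed. The same leg in isolation (secrets shortened to placeholders; the trace passes full DHHC-1 blobs):

  HOSTNQN='nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562'
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
       -q "$HOSTNQN" --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 \
       --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: 1 controller disconnected
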
00:15:06.851 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.851 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.851 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.851 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.851 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.851 { 00:15:06.851 "cntlid": 75, 00:15:06.851 "qid": 0, 00:15:06.851 "state": "enabled", 00:15:06.851 "thread": "nvmf_tgt_poll_group_000", 00:15:06.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:06.851 "listen_address": { 00:15:06.851 "trtype": "RDMA", 00:15:06.851 "adrfam": "IPv4", 00:15:06.851 "traddr": "192.168.100.8", 00:15:06.851 "trsvcid": "4420" 00:15:06.851 }, 00:15:06.851 "peer_address": { 00:15:06.851 "trtype": "RDMA", 00:15:06.851 "adrfam": "IPv4", 00:15:06.851 "traddr": "192.168.100.8", 00:15:06.851 "trsvcid": "38488" 00:15:06.851 }, 00:15:06.851 "auth": { 00:15:06.851 "state": "completed", 00:15:06.851 "digest": "sha384", 00:15:06.851 "dhgroup": "ffdhe4096" 00:15:06.851 } 00:15:06.851 } 00:15:06.851 ]' 00:15:06.851 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.851 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.852 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.852 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:06.852 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.852 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.852 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.852 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.110 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:07.110 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:07.676 04:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.935 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.195 00:15:08.195 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.195 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.195 04:03:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.454 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.454 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.454 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.454 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.454 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.454 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.454 { 00:15:08.454 "cntlid": 77, 00:15:08.454 "qid": 0, 00:15:08.454 "state": "enabled", 00:15:08.454 "thread": "nvmf_tgt_poll_group_000", 00:15:08.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:08.454 "listen_address": { 00:15:08.454 "trtype": "RDMA", 00:15:08.454 "adrfam": "IPv4", 00:15:08.454 "traddr": "192.168.100.8", 00:15:08.454 "trsvcid": "4420" 00:15:08.454 }, 00:15:08.454 "peer_address": { 00:15:08.454 "trtype": "RDMA", 00:15:08.454 "adrfam": "IPv4", 00:15:08.454 "traddr": "192.168.100.8", 00:15:08.454 "trsvcid": "40327" 00:15:08.454 }, 00:15:08.454 "auth": { 00:15:08.454 "state": "completed", 00:15:08.454 "digest": "sha384", 00:15:08.454 "dhgroup": "ffdhe4096" 00:15:08.454 } 00:15:08.454 } 00:15:08.454 ]' 00:15:08.454 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.454 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.454 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.713 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.713 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.713 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.713 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.713 04:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.713 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:08.713 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:09.307 04:03:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.604 04:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.872 00:15:09.872 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:09.872 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.872 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.149 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.149 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.150 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.150 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.150 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.150 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.150 { 00:15:10.150 "cntlid": 79, 00:15:10.150 "qid": 0, 00:15:10.150 "state": "enabled", 00:15:10.150 "thread": "nvmf_tgt_poll_group_000", 00:15:10.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:10.150 "listen_address": { 00:15:10.150 "trtype": "RDMA", 00:15:10.150 "adrfam": "IPv4", 00:15:10.150 "traddr": "192.168.100.8", 00:15:10.150 "trsvcid": "4420" 00:15:10.150 }, 00:15:10.150 "peer_address": { 00:15:10.150 "trtype": "RDMA", 00:15:10.150 "adrfam": "IPv4", 00:15:10.150 "traddr": "192.168.100.8", 00:15:10.150 "trsvcid": "57794" 00:15:10.150 }, 00:15:10.150 "auth": { 00:15:10.150 "state": "completed", 00:15:10.150 "digest": "sha384", 00:15:10.150 "dhgroup": "ffdhe4096" 00:15:10.150 } 00:15:10.150 } 00:15:10.150 ]' 00:15:10.150 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.150 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.150 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.150 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:10.150 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.420 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.420 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.420 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.420 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:10.421 04:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:10.986 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.243 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:11.243 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.243 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.243 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.243 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.243 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.243 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:11.243 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:11.500 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:11.500 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.500 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.500 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:11.500 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:11.500 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.500 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.500 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.500 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.501 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.501 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.501 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.501 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.758 00:15:11.758 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.758 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.758 04:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.016 { 00:15:12.016 "cntlid": 81, 00:15:12.016 "qid": 0, 00:15:12.016 "state": "enabled", 00:15:12.016 "thread": "nvmf_tgt_poll_group_000", 00:15:12.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:12.016 "listen_address": { 00:15:12.016 "trtype": "RDMA", 00:15:12.016 "adrfam": "IPv4", 00:15:12.016 "traddr": "192.168.100.8", 00:15:12.016 "trsvcid": "4420" 00:15:12.016 }, 00:15:12.016 "peer_address": { 00:15:12.016 "trtype": "RDMA", 00:15:12.016 "adrfam": "IPv4", 00:15:12.016 "traddr": "192.168.100.8", 00:15:12.016 "trsvcid": "34005" 00:15:12.016 }, 00:15:12.016 "auth": { 00:15:12.016 "state": "completed", 00:15:12.016 "digest": "sha384", 00:15:12.016 "dhgroup": "ffdhe6144" 00:15:12.016 } 00:15:12.016 } 00:15:12.016 ]' 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.016 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.273 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret 
DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:12.273 04:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:12.838 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.838 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:12.838 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.838 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.838 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.838 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.838 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:12.838 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.095 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.353 00:15:13.353 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.353 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.353 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.610 { 00:15:13.610 "cntlid": 83, 00:15:13.610 "qid": 0, 00:15:13.610 "state": "enabled", 00:15:13.610 "thread": "nvmf_tgt_poll_group_000", 00:15:13.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:13.610 "listen_address": { 00:15:13.610 "trtype": "RDMA", 00:15:13.610 "adrfam": "IPv4", 00:15:13.610 "traddr": "192.168.100.8", 00:15:13.610 "trsvcid": "4420" 00:15:13.610 }, 00:15:13.610 "peer_address": { 00:15:13.610 "trtype": "RDMA", 00:15:13.610 "adrfam": "IPv4", 00:15:13.610 "traddr": "192.168.100.8", 00:15:13.610 "trsvcid": "51565" 00:15:13.610 }, 00:15:13.610 "auth": { 00:15:13.610 "state": "completed", 00:15:13.610 "digest": "sha384", 00:15:13.610 "dhgroup": "ffdhe6144" 00:15:13.610 } 00:15:13.610 } 00:15:13.610 ]' 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.610 04:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.867 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.867 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.867 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.867 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:13.867 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:14.431 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.688 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:14.688 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.688 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.688 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.688 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.688 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:14.688 04:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
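[Annotation, not part of the captured trace] Every iteration above follows the same three-step setup before authentication is verified: pin the host to a single digest/DH-group pair, register the host NQN on the subsystem with the key under test, then attach a controller over RDMA, which is where DH-HCHAP actually executes. A minimal bash sketch of that pattern, assuming rpc_cmd addresses the target's default RPC socket while hostrpc wraps rpc.py -s /var/tmp/host.sock (both wrappers are visible in the trace):

    # Values taken verbatim from the trace; key2/ckey2 are the pre-loaded
    # DH-HCHAP key names for this iteration.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # 1. Host side: allow only this digest and DH group for the handshake.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # 2. Target side (default RPC socket, an assumption here): register the
    #    host with the key, plus a controller key for bidirectional auth.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 3. Attach over RDMA; DH-HCHAP runs during controller initialization.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2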
00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.946 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.203 00:15:15.203 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.203 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.203 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.460 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.461 { 00:15:15.461 "cntlid": 85, 00:15:15.461 "qid": 0, 00:15:15.461 "state": "enabled", 00:15:15.461 "thread": "nvmf_tgt_poll_group_000", 00:15:15.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:15.461 "listen_address": { 00:15:15.461 "trtype": "RDMA", 00:15:15.461 "adrfam": "IPv4", 00:15:15.461 "traddr": "192.168.100.8", 00:15:15.461 "trsvcid": "4420" 00:15:15.461 }, 00:15:15.461 "peer_address": { 00:15:15.461 "trtype": "RDMA", 00:15:15.461 "adrfam": "IPv4", 00:15:15.461 "traddr": "192.168.100.8", 00:15:15.461 "trsvcid": "32907" 00:15:15.461 }, 00:15:15.461 "auth": { 00:15:15.461 "state": "completed", 00:15:15.461 "digest": "sha384", 00:15:15.461 "dhgroup": "ffdhe6144" 00:15:15.461 } 00:15:15.461 } 00:15:15.461 ]' 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.461 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.718 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:15.718 04:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:16.280 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.280 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:16.280 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.280 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.536 
04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:16.536 04:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.099 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.099 { 00:15:17.099 "cntlid": 87, 00:15:17.099 "qid": 0, 00:15:17.099 "state": "enabled", 00:15:17.099 "thread": "nvmf_tgt_poll_group_000", 00:15:17.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:17.099 "listen_address": { 00:15:17.099 "trtype": "RDMA", 00:15:17.099 "adrfam": "IPv4", 00:15:17.099 "traddr": "192.168.100.8", 00:15:17.099 "trsvcid": "4420" 00:15:17.099 }, 00:15:17.099 "peer_address": { 00:15:17.099 "trtype": "RDMA", 00:15:17.099 "adrfam": "IPv4", 00:15:17.099 "traddr": "192.168.100.8", 00:15:17.099 "trsvcid": "47094" 00:15:17.099 }, 00:15:17.099 "auth": { 00:15:17.099 "state": "completed", 00:15:17.099 "digest": "sha384", 00:15:17.099 "dhgroup": "ffdhe6144" 00:15:17.099 } 00:15:17.099 } 00:15:17.099 ]' 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:17.099 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.356 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.356 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.356 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.356 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:17.356 04:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:17.920 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.177 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:18.177 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.177 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.177 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.177 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.177 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.177 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:18.177 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.434 04:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.691 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.948 { 00:15:18.948 "cntlid": 89, 00:15:18.948 "qid": 0, 00:15:18.948 "state": "enabled", 00:15:18.948 "thread": "nvmf_tgt_poll_group_000", 00:15:18.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:18.948 "listen_address": { 00:15:18.948 "trtype": "RDMA", 00:15:18.948 "adrfam": "IPv4", 00:15:18.948 "traddr": "192.168.100.8", 00:15:18.948 "trsvcid": "4420" 00:15:18.948 }, 00:15:18.948 "peer_address": { 00:15:18.948 "trtype": "RDMA", 00:15:18.948 "adrfam": "IPv4", 00:15:18.948 "traddr": "192.168.100.8", 00:15:18.948 "trsvcid": "48176" 00:15:18.948 }, 00:15:18.948 "auth": { 00:15:18.948 "state": "completed", 00:15:18.948 "digest": "sha384", 00:15:18.948 "dhgroup": "ffdhe8192" 00:15:18.948 } 00:15:18.948 } 00:15:18.948 ]' 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.948 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.948 
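[Annotation, not part of the captured trace] The checks around this point are how each iteration is judged: the controller must come up under the expected name, and the target's view of the queue pair must report the digest and DH group the host was pinned to, with authentication state "completed". A sketch of that verification, reusing the jq filters and expected values shown in the trace (RPC, HOSTNQN, and SUBNQN as in the earlier sketch):

    name=$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                                          # attach succeeded
    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384    ]]  # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe8192 ]]  # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]  # handshake finished
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0  # tear down for next leg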
04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.205 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:19.205 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.205 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.205 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.205 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.461 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:19.461 04:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:20.024 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.024 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:20.024 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.024 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.024 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.025 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.025 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:20.025 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:20.282 04:03:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.282 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.538 00:15:20.795 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.795 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.795 04:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.795 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.795 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.795 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.795 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.795 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.795 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.795 { 00:15:20.795 "cntlid": 91, 00:15:20.795 "qid": 0, 00:15:20.795 "state": "enabled", 00:15:20.795 "thread": "nvmf_tgt_poll_group_000", 00:15:20.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:20.795 "listen_address": { 00:15:20.795 "trtype": "RDMA", 00:15:20.795 "adrfam": "IPv4", 00:15:20.795 "traddr": "192.168.100.8", 00:15:20.795 "trsvcid": "4420" 00:15:20.795 }, 00:15:20.795 "peer_address": { 00:15:20.795 "trtype": "RDMA", 00:15:20.795 "adrfam": "IPv4", 00:15:20.795 "traddr": "192.168.100.8", 00:15:20.795 "trsvcid": "52026" 00:15:20.795 }, 00:15:20.795 "auth": { 00:15:20.795 "state": 
"completed", 00:15:20.795 "digest": "sha384", 00:15:20.795 "dhgroup": "ffdhe8192" 00:15:20.795 } 00:15:20.795 } 00:15:20.795 ]' 00:15:20.795 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.795 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.795 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.052 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.052 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.052 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.052 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.052 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.052 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:21.052 04:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.981 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.545 00:15:22.545 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.545 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.545 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.801 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.801 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.801 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.801 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.801 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.801 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.801 { 00:15:22.801 "cntlid": 93, 00:15:22.801 "qid": 0, 00:15:22.801 "state": "enabled", 00:15:22.801 "thread": "nvmf_tgt_poll_group_000", 00:15:22.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:22.801 "listen_address": { 00:15:22.801 "trtype": "RDMA", 00:15:22.801 "adrfam": "IPv4", 00:15:22.801 "traddr": "192.168.100.8", 00:15:22.801 "trsvcid": "4420" 
00:15:22.801 }, 00:15:22.801 "peer_address": { 00:15:22.801 "trtype": "RDMA", 00:15:22.801 "adrfam": "IPv4", 00:15:22.801 "traddr": "192.168.100.8", 00:15:22.801 "trsvcid": "45626" 00:15:22.801 }, 00:15:22.801 "auth": { 00:15:22.801 "state": "completed", 00:15:22.801 "digest": "sha384", 00:15:22.801 "dhgroup": "ffdhe8192" 00:15:22.801 } 00:15:22.801 } 00:15:22.801 ]' 00:15:22.801 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.801 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.801 04:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.801 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.801 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.801 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.801 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.801 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.058 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:23.058 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:23.622 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.622 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:23.622 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.622 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.622 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.622 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.622 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:23.622 04:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:23.878 
04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.878 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.441 00:15:24.441 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.441 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.441 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.698 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.698 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.698 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.698 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.698 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.698 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.698 { 00:15:24.698 "cntlid": 95, 00:15:24.698 "qid": 0, 00:15:24.698 "state": "enabled", 00:15:24.698 "thread": "nvmf_tgt_poll_group_000", 00:15:24.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:24.698 
"listen_address": { 00:15:24.698 "trtype": "RDMA", 00:15:24.698 "adrfam": "IPv4", 00:15:24.698 "traddr": "192.168.100.8", 00:15:24.698 "trsvcid": "4420" 00:15:24.698 }, 00:15:24.698 "peer_address": { 00:15:24.698 "trtype": "RDMA", 00:15:24.698 "adrfam": "IPv4", 00:15:24.698 "traddr": "192.168.100.8", 00:15:24.698 "trsvcid": "58648" 00:15:24.698 }, 00:15:24.698 "auth": { 00:15:24.698 "state": "completed", 00:15:24.698 "digest": "sha384", 00:15:24.698 "dhgroup": "ffdhe8192" 00:15:24.698 } 00:15:24.698 } 00:15:24.698 ]' 00:15:24.698 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.698 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.698 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.699 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.699 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.699 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.699 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.699 04:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.955 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:24.955 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:25.518 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.518 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:25.519 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.519 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.519 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.519 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:25.519 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.519 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.519 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:15:25.519 04:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.777 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.034 00:15:26.035 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.035 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.035 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
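The pass above and the ones that follow are single iterations of the auth matrix in target/auth.sh: the host is pinned to one digest/dhgroup pair, then connect_authenticate runs once per key index. A minimal sketch of that driver loop, reconstructed from the @118-@123 trace lines (the array names digests/dhgroups/keys and the hostrpc wrapper appear in the trace itself; everything else is abbreviated, not the verbatim script):

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # @121: restrict the host to exactly one digest and one DH group,
          # so a successful connect proves that specific combination negotiated
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # @123: run the add_host/attach/verify/detach cycle for this key index
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
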
00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.292 { 00:15:26.292 "cntlid": 97, 00:15:26.292 "qid": 0, 00:15:26.292 "state": "enabled", 00:15:26.292 "thread": "nvmf_tgt_poll_group_000", 00:15:26.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:26.292 "listen_address": { 00:15:26.292 "trtype": "RDMA", 00:15:26.292 "adrfam": "IPv4", 00:15:26.292 "traddr": "192.168.100.8", 00:15:26.292 "trsvcid": "4420" 00:15:26.292 }, 00:15:26.292 "peer_address": { 00:15:26.292 "trtype": "RDMA", 00:15:26.292 "adrfam": "IPv4", 00:15:26.292 "traddr": "192.168.100.8", 00:15:26.292 "trsvcid": "42297" 00:15:26.292 }, 00:15:26.292 "auth": { 00:15:26.292 "state": "completed", 00:15:26.292 "digest": "sha512", 00:15:26.292 "dhgroup": "null" 00:15:26.292 } 00:15:26.292 } 00:15:26.292 ]' 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.292 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.549 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:26.549 04:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:27.112 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.112 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:27.112 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.112 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.112 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.112 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.112 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.112 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.369 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.625 00:15:27.625 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.625 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.625 04:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.882 04:03:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.882 { 00:15:27.882 "cntlid": 99, 00:15:27.882 "qid": 0, 00:15:27.882 "state": "enabled", 00:15:27.882 "thread": "nvmf_tgt_poll_group_000", 00:15:27.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:27.882 "listen_address": { 00:15:27.882 "trtype": "RDMA", 00:15:27.882 "adrfam": "IPv4", 00:15:27.882 "traddr": "192.168.100.8", 00:15:27.882 "trsvcid": "4420" 00:15:27.882 }, 00:15:27.882 "peer_address": { 00:15:27.882 "trtype": "RDMA", 00:15:27.882 "adrfam": "IPv4", 00:15:27.882 "traddr": "192.168.100.8", 00:15:27.882 "trsvcid": "60225" 00:15:27.882 }, 00:15:27.882 "auth": { 00:15:27.882 "state": "completed", 00:15:27.882 "digest": "sha512", 00:15:27.882 "dhgroup": "null" 00:15:27.882 } 00:15:27.882 } 00:15:27.882 ]' 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.882 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.139 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:28.139 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:28.703 04:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.703 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:28.703 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.703 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.703 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.703 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.703 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.960 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.217 00:15:29.217 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.217 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.217 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.474 { 00:15:29.474 "cntlid": 101, 00:15:29.474 "qid": 0, 00:15:29.474 "state": "enabled", 00:15:29.474 "thread": "nvmf_tgt_poll_group_000", 00:15:29.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:29.474 "listen_address": { 00:15:29.474 "trtype": "RDMA", 00:15:29.474 "adrfam": "IPv4", 00:15:29.474 "traddr": "192.168.100.8", 00:15:29.474 "trsvcid": "4420" 00:15:29.474 }, 00:15:29.474 "peer_address": { 00:15:29.474 "trtype": "RDMA", 00:15:29.474 "adrfam": "IPv4", 00:15:29.474 "traddr": "192.168.100.8", 00:15:29.474 "trsvcid": "45012" 00:15:29.474 }, 00:15:29.474 "auth": { 00:15:29.474 "state": "completed", 00:15:29.474 "digest": "sha512", 00:15:29.474 "dhgroup": "null" 00:15:29.474 } 00:15:29.474 } 00:15:29.474 ]' 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.474 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.730 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:29.731 04:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:30.294 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.551 04:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.810 00:15:30.810 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.810 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.810 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.068 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.068 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.068 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.068 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.068 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.068 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.068 { 00:15:31.068 "cntlid": 103, 00:15:31.068 "qid": 0, 00:15:31.068 "state": "enabled", 00:15:31.069 "thread": "nvmf_tgt_poll_group_000", 00:15:31.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:31.069 "listen_address": { 00:15:31.069 "trtype": "RDMA", 00:15:31.069 "adrfam": "IPv4", 00:15:31.069 "traddr": "192.168.100.8", 00:15:31.069 "trsvcid": "4420" 00:15:31.069 }, 00:15:31.069 "peer_address": { 00:15:31.069 "trtype": "RDMA", 00:15:31.069 "adrfam": "IPv4", 00:15:31.069 "traddr": "192.168.100.8", 00:15:31.069 "trsvcid": "45437" 00:15:31.069 }, 00:15:31.069 "auth": { 00:15:31.069 "state": "completed", 00:15:31.069 "digest": "sha512", 00:15:31.069 "dhgroup": "null" 00:15:31.069 } 00:15:31.069 } 00:15:31.069 ]' 00:15:31.069 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.069 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.069 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.069 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:31.069 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.069 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.069 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.069 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.325 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:31.325 04:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:31.888 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- 
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.145 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.402 00:15:32.402 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.402 04:03:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.402 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.659 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.659 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.659 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.659 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.659 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.659 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.659 { 00:15:32.659 "cntlid": 105, 00:15:32.659 "qid": 0, 00:15:32.659 "state": "enabled", 00:15:32.659 "thread": "nvmf_tgt_poll_group_000", 00:15:32.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:32.659 "listen_address": { 00:15:32.659 "trtype": "RDMA", 00:15:32.659 "adrfam": "IPv4", 00:15:32.659 "traddr": "192.168.100.8", 00:15:32.659 "trsvcid": "4420" 00:15:32.659 }, 00:15:32.659 "peer_address": { 00:15:32.659 "trtype": "RDMA", 00:15:32.659 "adrfam": "IPv4", 00:15:32.659 "traddr": "192.168.100.8", 00:15:32.659 "trsvcid": "37982" 00:15:32.659 }, 00:15:32.659 "auth": { 00:15:32.659 "state": "completed", 00:15:32.659 "digest": "sha512", 00:15:32.659 "dhgroup": "ffdhe2048" 00:15:32.659 } 00:15:32.659 } 00:15:32.659 ]' 00:15:32.659 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.659 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.659 04:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.659 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:32.659 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.916 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.916 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.916 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.916 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:32.916 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:33.497 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.788 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:33.788 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.788 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.788 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.788 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.788 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:33.788 04:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.788 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.065 00:15:34.065 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.065 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.065 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.323 { 00:15:34.323 "cntlid": 107, 00:15:34.323 "qid": 0, 00:15:34.323 "state": "enabled", 00:15:34.323 "thread": "nvmf_tgt_poll_group_000", 00:15:34.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:34.323 "listen_address": { 00:15:34.323 "trtype": "RDMA", 00:15:34.323 "adrfam": "IPv4", 00:15:34.323 "traddr": "192.168.100.8", 00:15:34.323 "trsvcid": "4420" 00:15:34.323 }, 00:15:34.323 "peer_address": { 00:15:34.323 "trtype": "RDMA", 00:15:34.323 "adrfam": "IPv4", 00:15:34.323 "traddr": "192.168.100.8", 00:15:34.323 "trsvcid": "54941" 00:15:34.323 }, 00:15:34.323 "auth": { 00:15:34.323 "state": "completed", 00:15:34.323 "digest": "sha512", 00:15:34.323 "dhgroup": "ffdhe2048" 00:15:34.323 } 00:15:34.323 } 00:15:34.323 ]' 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.323 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.581 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 
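connect_authenticate itself can be read off the @65-@83 lines that repeat for every combination above. A condensed sketch under the assumption that rpc_cmd talks to the nvmf target, hostrpc talks to /var/tmp/host.sock, and bdev_connect/nvme_connect are the thin wrappers the trace shows; $subnqn and $hostnqn stand in for the full subsystem and UUID host NQNs, and the nvme-cli secret plumbing is abbreviated:

    connect_authenticate() {
      local digest=$1 dhgroup=$2 key=key$3 ckey qpairs
      ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})    # bidirectional auth only when a ctrlr key exists (@68)
      # @70: allow this host NQN on the subsystem with the key under test
      rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" "${ckey[@]}"
      # @71/@60: attach a host-side controller, forcing the DH-HMAC-CHAP handshake
      bdev_connect -b nvme0 --dhchap-key "$key" "${ckey[@]}"
      # @73-@77: confirm the controller exists and the qpair negotiated what was pinned
      [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
      [[ $(jq -r '.[0].auth.digest' <<<"$qpairs") == "$digest" ]]
      [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
      [[ $(jq -r '.[0].auth.state' <<<"$qpairs") == completed ]]
      # @78-@83: detach, repeat the handshake with nvme-cli, then clean up
      hostrpc bdev_nvme_detach_controller nvme0
      nvme_connect --dhchap-secret "${keys[$3]}"           # plus --dhchap-ctrl-secret when ckey is set (assumed plumbing)
      nvme disconnect -n "$subnqn"
      rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    }
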
00:15:34.581 04:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:35.147 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.405 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.663 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.663 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.663 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.663 04:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.663 00:15:35.663 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.663 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.663 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.921 { 00:15:35.921 "cntlid": 109, 00:15:35.921 "qid": 0, 00:15:35.921 "state": "enabled", 00:15:35.921 "thread": "nvmf_tgt_poll_group_000", 00:15:35.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:35.921 "listen_address": { 00:15:35.921 "trtype": "RDMA", 00:15:35.921 "adrfam": "IPv4", 00:15:35.921 "traddr": "192.168.100.8", 00:15:35.921 "trsvcid": "4420" 00:15:35.921 }, 00:15:35.921 "peer_address": { 00:15:35.921 "trtype": "RDMA", 00:15:35.921 "adrfam": "IPv4", 00:15:35.921 "traddr": "192.168.100.8", 00:15:35.921 "trsvcid": "37930" 00:15:35.921 }, 00:15:35.921 "auth": { 00:15:35.921 "state": "completed", 00:15:35.921 "digest": "sha512", 00:15:35.921 "dhgroup": "ffdhe2048" 00:15:35.921 } 00:15:35.921 } 00:15:35.921 ]' 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.921 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.179 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.179 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.179 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.179 04:03:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:36.179 04:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:36.745 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.002 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:37.002 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.002 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.002 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.002 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.002 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:37.002 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:37.261 04:03:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.261 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.519 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.519 { 00:15:37.519 "cntlid": 111, 00:15:37.519 "qid": 0, 00:15:37.519 "state": "enabled", 00:15:37.519 "thread": "nvmf_tgt_poll_group_000", 00:15:37.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:37.519 "listen_address": { 00:15:37.519 "trtype": "RDMA", 00:15:37.519 "adrfam": "IPv4", 00:15:37.519 "traddr": "192.168.100.8", 00:15:37.519 "trsvcid": "4420" 00:15:37.519 }, 00:15:37.519 "peer_address": { 00:15:37.519 "trtype": "RDMA", 00:15:37.519 "adrfam": "IPv4", 00:15:37.519 "traddr": "192.168.100.8", 00:15:37.519 "trsvcid": "49011" 00:15:37.519 }, 00:15:37.519 "auth": { 00:15:37.519 "state": "completed", 00:15:37.519 "digest": "sha512", 00:15:37.519 "dhgroup": "ffdhe2048" 00:15:37.519 } 00:15:37.519 } 00:15:37.519 ]' 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.519 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.778 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.778 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.778 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.778 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.778 04:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.778 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:37.778 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:38.344 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.602 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:38.602 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.602 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.602 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.602 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.602 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.602 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:38.602 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:38.860 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:38.860 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.860 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:38.860 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:38.860 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:38.860 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.860 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.860 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.860 04:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.860 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:38.860 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.861 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.861 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.119 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.119 { 00:15:39.119 "cntlid": 113, 00:15:39.119 "qid": 0, 00:15:39.119 "state": "enabled", 00:15:39.119 "thread": "nvmf_tgt_poll_group_000", 00:15:39.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:39.119 "listen_address": { 00:15:39.119 "trtype": "RDMA", 00:15:39.119 "adrfam": "IPv4", 00:15:39.119 "traddr": "192.168.100.8", 00:15:39.119 "trsvcid": "4420" 00:15:39.119 }, 00:15:39.119 "peer_address": { 00:15:39.119 "trtype": "RDMA", 00:15:39.119 "adrfam": "IPv4", 00:15:39.119 "traddr": "192.168.100.8", 00:15:39.119 "trsvcid": "39468" 00:15:39.119 }, 00:15:39.119 "auth": { 00:15:39.119 "state": "completed", 00:15:39.119 "digest": "sha512", 00:15:39.119 "dhgroup": "ffdhe3072" 00:15:39.119 } 00:15:39.119 } 00:15:39.119 ]' 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.119 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.376 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:39.376 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.376 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.376 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.376 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.376 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:39.376 04:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
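Condensed from the trace above, one host-side pass of this test boils down to three RPC calls: restrict the host's DH-HMAC-CHAP parameters, register the host NQN on the subsystem with a key pair, then attach. A minimal sketch of the key1/ffdhe3072 pass, assuming the key names (key1, ckey1) were loaded into the host keyring earlier in target/auth.sh (not shown in this excerpt):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562

# 1. Pin the host to a single digest/dhgroup combination for this pass.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# 2. Allow the host on the subsystem with the matching key pair
#    (target side, default RPC socket).
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Attach from the host; the controller only appears if DH-HMAC-CHAP completes.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
  -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
  -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1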
00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.315 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.573 00:15:40.574 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.574 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.574 04:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.836 { 00:15:40.836 "cntlid": 115, 00:15:40.836 "qid": 0, 00:15:40.836 "state": "enabled", 00:15:40.836 "thread": "nvmf_tgt_poll_group_000", 00:15:40.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:40.836 "listen_address": { 00:15:40.836 "trtype": "RDMA", 00:15:40.836 "adrfam": "IPv4", 00:15:40.836 "traddr": "192.168.100.8", 00:15:40.836 "trsvcid": "4420" 00:15:40.836 }, 00:15:40.836 "peer_address": { 00:15:40.836 "trtype": "RDMA", 00:15:40.836 "adrfam": "IPv4", 00:15:40.836 "traddr": "192.168.100.8", 00:15:40.836 "trsvcid": "56136" 00:15:40.836 }, 00:15:40.836 "auth": { 00:15:40.836 "state": "completed", 00:15:40.836 "digest": "sha512", 00:15:40.836 "dhgroup": "ffdhe3072" 00:15:40.836 } 00:15:40.836 } 00:15:40.836 ]' 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
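Each attach is followed by the same verification sequence: the controller name is read back, the target's qpair list is inspected for the negotiated auth parameters, and the controller is detached before the next pass. A sketch of that check, reusing the jq filters from target/auth.sh@73-77 (the qpairs shell variable here is illustrative):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# A controller must exist on the host side after a successful handshake.
[[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target's qpair must report the digest/dhgroup this pass configured,
# and an auth state of "completed".
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next keyid/dhgroup iteration.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0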
00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:40.836 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.097 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.097 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.097 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.097 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:41.097 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:41.670 04:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.928 
04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.928 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.186 00:15:42.186 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.186 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.186 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.444 { 00:15:42.444 "cntlid": 117, 00:15:42.444 "qid": 0, 00:15:42.444 "state": "enabled", 00:15:42.444 "thread": "nvmf_tgt_poll_group_000", 00:15:42.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:42.444 "listen_address": { 00:15:42.444 "trtype": "RDMA", 00:15:42.444 "adrfam": "IPv4", 00:15:42.444 "traddr": "192.168.100.8", 00:15:42.444 "trsvcid": "4420" 00:15:42.444 }, 00:15:42.444 "peer_address": { 00:15:42.444 "trtype": "RDMA", 00:15:42.444 "adrfam": "IPv4", 00:15:42.444 "traddr": "192.168.100.8", 00:15:42.444 "trsvcid": "36851" 00:15:42.444 }, 00:15:42.444 "auth": { 00:15:42.444 "state": "completed", 00:15:42.444 "digest": "sha512", 00:15:42.444 "dhgroup": "ffdhe3072" 00:15:42.444 } 00:15:42.444 } 00:15:42.444 ]' 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.444 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.702 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:42.702 04:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:43.267 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
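After the SPDK host detaches, each pass repeats the handshake with the kernel initiator through nvme-cli, as in the nvme_connect/disconnect calls above. A sketch of that leg; the secrets are placeholders standing in for the full DHHC-1 base64 strings printed in the trace:

# Kernel-initiator leg of a pass (cf. target/auth.sh@36 above).
# The <...> secrets are placeholders, not real keys.
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
  --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 \
  --dhchap-secret 'DHHC-1:02:<host key>:' \
  --dhchap-ctrl-secret 'DHHC-1:01:<controller key>:'

# On success the subsystem reports one connected controller; drop it
# before the next iteration.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0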
00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.526 04:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.783 00:15:43.783 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.783 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.783 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.040 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.040 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.041 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.041 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.041 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.041 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.041 { 00:15:44.041 "cntlid": 119, 00:15:44.041 "qid": 0, 00:15:44.041 "state": "enabled", 00:15:44.041 "thread": "nvmf_tgt_poll_group_000", 00:15:44.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:44.041 "listen_address": { 00:15:44.041 "trtype": "RDMA", 00:15:44.041 "adrfam": "IPv4", 00:15:44.041 "traddr": "192.168.100.8", 00:15:44.041 "trsvcid": "4420" 00:15:44.041 }, 00:15:44.041 "peer_address": { 00:15:44.041 "trtype": "RDMA", 00:15:44.041 "adrfam": "IPv4", 00:15:44.041 "traddr": "192.168.100.8", 00:15:44.041 "trsvcid": "48409" 00:15:44.041 }, 00:15:44.041 "auth": { 00:15:44.041 "state": "completed", 00:15:44.041 "digest": "sha512", 00:15:44.041 "dhgroup": "ffdhe3072" 
00:15:44.041 } 00:15:44.041 } 00:15:44.041 ]' 00:15:44.041 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.041 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:44.041 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.041 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.041 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.298 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.298 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.298 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.298 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:44.298 04:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:44.861 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.118 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.375 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.375 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.375 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.376 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.376 00:15:45.632 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.632 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.632 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.632 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.633 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.633 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.633 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.633 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.633 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.633 { 00:15:45.633 "cntlid": 121, 00:15:45.633 "qid": 0, 00:15:45.633 "state": "enabled", 00:15:45.633 "thread": "nvmf_tgt_poll_group_000", 00:15:45.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:45.633 "listen_address": { 00:15:45.633 "trtype": "RDMA", 00:15:45.633 "adrfam": "IPv4", 00:15:45.633 "traddr": "192.168.100.8", 00:15:45.633 "trsvcid": "4420" 00:15:45.633 }, 00:15:45.633 "peer_address": { 00:15:45.633 "trtype": "RDMA", 
00:15:45.633 "adrfam": "IPv4", 00:15:45.633 "traddr": "192.168.100.8", 00:15:45.633 "trsvcid": "48700" 00:15:45.633 }, 00:15:45.633 "auth": { 00:15:45.633 "state": "completed", 00:15:45.633 "digest": "sha512", 00:15:45.633 "dhgroup": "ffdhe4096" 00:15:45.633 } 00:15:45.633 } 00:15:45.633 ]' 00:15:45.633 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.633 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.633 04:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.890 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:45.890 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.890 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.890 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.890 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.890 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:45.890 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:46.820 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.820 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:46.820 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.820 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.820 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.820 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.820 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:46.820 04:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.820 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.077 00:15:47.077 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.077 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.077 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.334 { 00:15:47.334 "cntlid": 123, 00:15:47.334 "qid": 0, 00:15:47.334 "state": "enabled", 00:15:47.334 "thread": "nvmf_tgt_poll_group_000", 
00:15:47.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:47.334 "listen_address": { 00:15:47.334 "trtype": "RDMA", 00:15:47.334 "adrfam": "IPv4", 00:15:47.334 "traddr": "192.168.100.8", 00:15:47.334 "trsvcid": "4420" 00:15:47.334 }, 00:15:47.334 "peer_address": { 00:15:47.334 "trtype": "RDMA", 00:15:47.334 "adrfam": "IPv4", 00:15:47.334 "traddr": "192.168.100.8", 00:15:47.334 "trsvcid": "49443" 00:15:47.334 }, 00:15:47.334 "auth": { 00:15:47.334 "state": "completed", 00:15:47.334 "digest": "sha512", 00:15:47.334 "dhgroup": "ffdhe4096" 00:15:47.334 } 00:15:47.334 } 00:15:47.334 ]' 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.334 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:47.335 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.592 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.592 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.592 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.592 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:47.592 04:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:48.155 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.412 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:48.412 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.412 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.412 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.412 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.412 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:15:48.412 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.669 04:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.926 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
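The secrets exchanged throughout this section follow the DHHC-1 key format from NVMe in-band authentication: the field after DHHC-1 names the transformation applied to the key material (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), followed by the base64-encoded key plus a CRC. Recent nvme-cli can generate such keys; the invocation below comes from nvme-cli's documentation rather than this log, so treat the exact flags as an assumption:

# Generate a DHHC-1:03:...: secret (SHA-512 transformation) bound to the
# host NQN used in this run. Flags assumed from nvme-cli >= 2.x.
nvme gen-dhchap-key --hmac=3 \
  --nqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
# -> prints one DHHC-1:03:<base64>: string suitable for --dhchap-secret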
00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.926 { 00:15:48.926 "cntlid": 125, 00:15:48.926 "qid": 0, 00:15:48.926 "state": "enabled", 00:15:48.926 "thread": "nvmf_tgt_poll_group_000", 00:15:48.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:48.926 "listen_address": { 00:15:48.926 "trtype": "RDMA", 00:15:48.926 "adrfam": "IPv4", 00:15:48.926 "traddr": "192.168.100.8", 00:15:48.926 "trsvcid": "4420" 00:15:48.926 }, 00:15:48.926 "peer_address": { 00:15:48.926 "trtype": "RDMA", 00:15:48.926 "adrfam": "IPv4", 00:15:48.926 "traddr": "192.168.100.8", 00:15:48.926 "trsvcid": "43318" 00:15:48.926 }, 00:15:48.926 "auth": { 00:15:48.926 "state": "completed", 00:15:48.926 "digest": "sha512", 00:15:48.926 "dhgroup": "ffdhe4096" 00:15:48.926 } 00:15:48.926 } 00:15:48.926 ]' 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.926 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.183 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:49.183 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.183 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.183 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.183 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.441 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:49.441 04:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:50.006 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.006 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:50.006 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.006 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.006 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.006 04:03:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.006 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.006 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.264 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.521 00:15:50.521 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.521 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.521 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.779 04:03:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.779 { 00:15:50.779 "cntlid": 127, 00:15:50.779 "qid": 0, 00:15:50.779 "state": "enabled", 00:15:50.779 "thread": "nvmf_tgt_poll_group_000", 00:15:50.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:50.779 "listen_address": { 00:15:50.779 "trtype": "RDMA", 00:15:50.779 "adrfam": "IPv4", 00:15:50.779 "traddr": "192.168.100.8", 00:15:50.779 "trsvcid": "4420" 00:15:50.779 }, 00:15:50.779 "peer_address": { 00:15:50.779 "trtype": "RDMA", 00:15:50.779 "adrfam": "IPv4", 00:15:50.779 "traddr": "192.168.100.8", 00:15:50.779 "trsvcid": "58569" 00:15:50.779 }, 00:15:50.779 "auth": { 00:15:50.779 "state": "completed", 00:15:50.779 "digest": "sha512", 00:15:50.779 "dhgroup": "ffdhe4096" 00:15:50.779 } 00:15:50.779 } 00:15:50.779 ]' 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:50.779 04:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.779 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.779 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.779 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.040 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:51.040 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:51.603 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.603 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:51.604 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.604 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.604 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.604 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.604 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.604 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:51.604 04:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.861 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.119 00:15:52.119 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.119 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.119 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.377 04:03:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.377 { 00:15:52.377 "cntlid": 129, 00:15:52.377 "qid": 0, 00:15:52.377 "state": "enabled", 00:15:52.377 "thread": "nvmf_tgt_poll_group_000", 00:15:52.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:52.377 "listen_address": { 00:15:52.377 "trtype": "RDMA", 00:15:52.377 "adrfam": "IPv4", 00:15:52.377 "traddr": "192.168.100.8", 00:15:52.377 "trsvcid": "4420" 00:15:52.377 }, 00:15:52.377 "peer_address": { 00:15:52.377 "trtype": "RDMA", 00:15:52.377 "adrfam": "IPv4", 00:15:52.377 "traddr": "192.168.100.8", 00:15:52.377 "trsvcid": "38467" 00:15:52.377 }, 00:15:52.377 "auth": { 00:15:52.377 "state": "completed", 00:15:52.377 "digest": "sha512", 00:15:52.377 "dhgroup": "ffdhe6144" 00:15:52.377 } 00:15:52.377 } 00:15:52.377 ]' 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.377 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.634 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:52.634 04:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:53.199 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.456 04:03:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.456 04:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.022 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.022 { 00:15:54.022 "cntlid": 131, 00:15:54.022 "qid": 0, 00:15:54.022 "state": "enabled", 00:15:54.022 "thread": "nvmf_tgt_poll_group_000", 00:15:54.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:54.022 "listen_address": { 00:15:54.022 "trtype": "RDMA", 00:15:54.022 "adrfam": "IPv4", 00:15:54.022 "traddr": "192.168.100.8", 00:15:54.022 "trsvcid": "4420" 00:15:54.022 }, 00:15:54.022 "peer_address": { 00:15:54.022 "trtype": "RDMA", 00:15:54.022 "adrfam": "IPv4", 00:15:54.022 "traddr": "192.168.100.8", 00:15:54.022 "trsvcid": "34861" 00:15:54.022 }, 00:15:54.022 "auth": { 00:15:54.022 "state": "completed", 00:15:54.022 "digest": "sha512", 00:15:54.022 "dhgroup": "ffdhe6144" 00:15:54.022 } 00:15:54.022 } 00:15:54.022 ]' 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.022 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.280 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:54.280 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.280 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.280 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.280 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.280 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:54.280 04:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret 
DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.213 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.778 00:15:55.778 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.778 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.778 04:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.778 { 00:15:55.778 "cntlid": 133, 00:15:55.778 "qid": 0, 00:15:55.778 "state": "enabled", 00:15:55.778 "thread": "nvmf_tgt_poll_group_000", 00:15:55.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:55.778 "listen_address": { 00:15:55.778 "trtype": "RDMA", 00:15:55.778 "adrfam": "IPv4", 00:15:55.778 "traddr": "192.168.100.8", 00:15:55.778 "trsvcid": "4420" 00:15:55.778 }, 00:15:55.778 "peer_address": { 00:15:55.778 "trtype": "RDMA", 00:15:55.778 "adrfam": "IPv4", 00:15:55.778 "traddr": "192.168.100.8", 00:15:55.778 "trsvcid": "59165" 00:15:55.778 }, 00:15:55.778 "auth": { 00:15:55.778 "state": "completed", 00:15:55.778 "digest": "sha512", 00:15:55.778 "dhgroup": "ffdhe6144" 00:15:55.778 } 00:15:55.778 } 00:15:55.778 ]' 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:55.778 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.036 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.036 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.036 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.036 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:56.036 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:15:56.601 04:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.859 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:56.859 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.859 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.859 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.859 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.859 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:56.859 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.117 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.374 00:15:57.374 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.374 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.374 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.374 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.632 { 00:15:57.632 "cntlid": 135, 00:15:57.632 "qid": 0, 00:15:57.632 "state": "enabled", 00:15:57.632 "thread": "nvmf_tgt_poll_group_000", 00:15:57.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:57.632 "listen_address": { 00:15:57.632 "trtype": "RDMA", 00:15:57.632 "adrfam": "IPv4", 00:15:57.632 "traddr": "192.168.100.8", 00:15:57.632 "trsvcid": "4420" 00:15:57.632 }, 00:15:57.632 "peer_address": { 00:15:57.632 "trtype": "RDMA", 00:15:57.632 "adrfam": "IPv4", 00:15:57.632 "traddr": "192.168.100.8", 00:15:57.632 "trsvcid": "40912" 00:15:57.632 }, 00:15:57.632 "auth": { 00:15:57.632 "state": "completed", 00:15:57.632 "digest": "sha512", 00:15:57.632 "dhgroup": "ffdhe6144" 00:15:57.632 } 00:15:57.632 } 00:15:57.632 ]' 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.632 04:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.890 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 
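The trace above is one pass of the suite's verify-and-reconnect cycle, repeated for every digest/DH-group/key combination (here sha512 with ffdhe6144, next ffdhe8192). As a condensed, hypothetical sketch of one such pass — assuming, as in this run, that the host-side bdev service listens on /var/tmp/host.sock, that the target uses rpc.py's default socket, and that keys key0/ckey0 were registered earlier in the test — the traced commands amount to:

rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562"
subnqn="nqn.2024-03.io.spdk:cnode0"

# Pin the host to a single digest/DH-group pair for this pass.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Target side: authorize the host with key0 (ckey0 enables bidirectional auth).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach; DH-HMAC-CHAP negotiation runs during CONNECT.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the qpair negotiated exactly what was requested.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # sha512
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # ffdhe6144
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # completed

# Tear down before the next combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The trailing nvme connect / nvme disconnect step that follows in the log re-runs the same authentication through the kernel initiator with the raw DHHC-1 secrets, confirming both code paths agree.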
00:15:57.890 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:15:58.454 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.454 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:58.454 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.454 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.455 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.455 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.455 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.455 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:58.455 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.712 04:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.970 00:15:59.229 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.229 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.229 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.229 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.229 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.229 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.229 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.229 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.229 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.229 { 00:15:59.229 "cntlid": 137, 00:15:59.229 "qid": 0, 00:15:59.229 "state": "enabled", 00:15:59.229 "thread": "nvmf_tgt_poll_group_000", 00:15:59.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:15:59.229 "listen_address": { 00:15:59.229 "trtype": "RDMA", 00:15:59.229 "adrfam": "IPv4", 00:15:59.229 "traddr": "192.168.100.8", 00:15:59.229 "trsvcid": "4420" 00:15:59.229 }, 00:15:59.229 "peer_address": { 00:15:59.229 "trtype": "RDMA", 00:15:59.230 "adrfam": "IPv4", 00:15:59.230 "traddr": "192.168.100.8", 00:15:59.230 "trsvcid": "57045" 00:15:59.230 }, 00:15:59.230 "auth": { 00:15:59.230 "state": "completed", 00:15:59.230 "digest": "sha512", 00:15:59.230 "dhgroup": "ffdhe8192" 00:15:59.230 } 00:15:59.230 } 00:15:59.230 ]' 00:15:59.230 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.230 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.230 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.487 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:59.487 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.487 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.487 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.487 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.487 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:15:59.487 04:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.419 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.420 04:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.984 00:16:00.984 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.984 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.984 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.241 { 00:16:01.241 "cntlid": 139, 00:16:01.241 "qid": 0, 00:16:01.241 "state": "enabled", 00:16:01.241 "thread": "nvmf_tgt_poll_group_000", 00:16:01.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:01.241 "listen_address": { 00:16:01.241 "trtype": "RDMA", 00:16:01.241 "adrfam": "IPv4", 00:16:01.241 "traddr": "192.168.100.8", 00:16:01.241 "trsvcid": "4420" 00:16:01.241 }, 00:16:01.241 "peer_address": { 00:16:01.241 "trtype": "RDMA", 00:16:01.241 "adrfam": "IPv4", 00:16:01.241 "traddr": "192.168.100.8", 00:16:01.241 "trsvcid": "39219" 00:16:01.241 }, 00:16:01.241 "auth": { 00:16:01.241 "state": "completed", 00:16:01.241 "digest": "sha512", 00:16:01.241 "dhgroup": "ffdhe8192" 00:16:01.241 } 00:16:01.241 } 00:16:01.241 ]' 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.241 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.499 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:16:01.499 04:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: --dhchap-ctrl-secret DHHC-1:02:NjdiZGVjZGE4YjcxMmE5ZWY2ZjIwNGM0NjljYzA4N2RjMjExYjIxZDM1ZTYwMDZkpoHR+A==: 00:16:02.064 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.064 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:02.064 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.064 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.064 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.064 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.064 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:02.064 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.322 04:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.887 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.887 { 00:16:02.887 "cntlid": 141, 00:16:02.887 "qid": 0, 00:16:02.887 "state": "enabled", 00:16:02.887 "thread": "nvmf_tgt_poll_group_000", 00:16:02.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:02.887 "listen_address": { 00:16:02.887 "trtype": "RDMA", 00:16:02.887 "adrfam": "IPv4", 00:16:02.887 "traddr": "192.168.100.8", 00:16:02.887 "trsvcid": "4420" 00:16:02.887 }, 00:16:02.887 "peer_address": { 00:16:02.887 "trtype": "RDMA", 00:16:02.887 "adrfam": "IPv4", 00:16:02.887 "traddr": "192.168.100.8", 00:16:02.887 "trsvcid": "36247" 00:16:02.887 }, 00:16:02.887 "auth": { 00:16:02.887 "state": "completed", 00:16:02.887 "digest": "sha512", 00:16:02.887 "dhgroup": "ffdhe8192" 00:16:02.887 } 00:16:02.887 } 00:16:02.887 ]' 00:16:02.887 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.145 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.145 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.145 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:03.145 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.145 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.145 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.145 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.403 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:16:03.403 04:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:01:NDNhODBhNzI1M2E4NDk2ZjIyN2ViZmUyN2EyNTEzNjAS60rX: 00:16:04.012 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.012 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:04.012 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.012 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.012 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.012 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.012 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:04.012 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.269 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.526 00:16:04.786 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.786 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.786 04:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.786 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.786 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.786 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.786 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.786 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.786 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.786 { 00:16:04.786 "cntlid": 143, 00:16:04.786 "qid": 0, 00:16:04.786 "state": "enabled", 00:16:04.786 "thread": "nvmf_tgt_poll_group_000", 00:16:04.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:04.786 "listen_address": { 00:16:04.786 "trtype": "RDMA", 00:16:04.786 "adrfam": "IPv4", 00:16:04.786 "traddr": "192.168.100.8", 00:16:04.786 "trsvcid": "4420" 00:16:04.786 }, 00:16:04.786 "peer_address": { 00:16:04.786 "trtype": "RDMA", 00:16:04.786 "adrfam": "IPv4", 00:16:04.786 "traddr": "192.168.100.8", 00:16:04.786 "trsvcid": "37510" 00:16:04.786 }, 00:16:04.786 "auth": { 00:16:04.786 "state": "completed", 00:16:04.786 "digest": "sha512", 00:16:04.786 "dhgroup": "ffdhe8192" 00:16:04.786 } 00:16:04.786 } 00:16:04.786 ]' 00:16:04.786 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.786 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.786 04:03:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.044 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:05.044 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.045 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.045 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.045 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.045 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:16:05.045 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:16:05.609 04:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:05.867 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:06.124 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:06.124 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.124 04:04:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.125 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.382 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.640 { 00:16:06.640 "cntlid": 145, 00:16:06.640 "qid": 0, 00:16:06.640 "state": "enabled", 00:16:06.640 "thread": "nvmf_tgt_poll_group_000", 00:16:06.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:06.640 "listen_address": { 00:16:06.640 "trtype": "RDMA", 00:16:06.640 "adrfam": "IPv4", 00:16:06.640 "traddr": "192.168.100.8", 00:16:06.640 "trsvcid": "4420" 00:16:06.640 }, 00:16:06.640 
"peer_address": { 00:16:06.640 "trtype": "RDMA", 00:16:06.640 "adrfam": "IPv4", 00:16:06.640 "traddr": "192.168.100.8", 00:16:06.640 "trsvcid": "45682" 00:16:06.640 }, 00:16:06.640 "auth": { 00:16:06.640 "state": "completed", 00:16:06.640 "digest": "sha512", 00:16:06.640 "dhgroup": "ffdhe8192" 00:16:06.640 } 00:16:06.640 } 00:16:06.640 ]' 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.640 04:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.897 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.897 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.897 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.897 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.897 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.898 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:16:06.898 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTI1ODlhNjIyM2E2OWZmNGJhODZiOTE3NWU1NjNjYzQ0ZjAzM2FjZWEwOTU3NDY1t0LOKw==: --dhchap-ctrl-secret DHHC-1:03:MTQ1MTgwZjg2NTM3ODM1NDM3ZTFkNWYwNzk1OWZhOTA4NjcyYzNmZGY3OWQzYTZmY2JmNDA1ZTNiZTMyZTE1MrGNd6Y=: 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.829 04:04:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:07.829 04:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:08.087 request: 00:16:08.087 { 00:16:08.087 "name": "nvme0", 00:16:08.087 "trtype": "rdma", 00:16:08.087 "traddr": "192.168.100.8", 00:16:08.087 "adrfam": "ipv4", 00:16:08.087 "trsvcid": "4420", 00:16:08.087 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:08.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:08.087 "prchk_reftag": false, 00:16:08.087 "prchk_guard": false, 00:16:08.087 "hdgst": false, 00:16:08.087 "ddgst": false, 00:16:08.087 "dhchap_key": "key2", 00:16:08.087 "allow_unrecognized_csi": false, 00:16:08.087 "method": "bdev_nvme_attach_controller", 00:16:08.087 "req_id": 1 00:16:08.087 } 00:16:08.087 Got JSON-RPC error response 00:16:08.087 response: 00:16:08.087 { 00:16:08.087 "code": -5, 00:16:08.087 "message": "Input/output error" 00:16:08.087 } 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:08.087 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:08.653 request: 00:16:08.653 { 00:16:08.653 "name": "nvme0", 00:16:08.653 "trtype": "rdma", 00:16:08.653 "traddr": "192.168.100.8", 00:16:08.653 "adrfam": "ipv4", 00:16:08.653 "trsvcid": "4420", 00:16:08.653 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:08.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:08.653 "prchk_reftag": false, 00:16:08.653 "prchk_guard": false, 00:16:08.653 "hdgst": false, 00:16:08.653 "ddgst": false, 00:16:08.653 "dhchap_key": "key1", 00:16:08.653 "dhchap_ctrlr_key": "ckey2", 00:16:08.653 "allow_unrecognized_csi": false, 00:16:08.653 "method": "bdev_nvme_attach_controller", 00:16:08.653 "req_id": 1 00:16:08.653 } 00:16:08.653 Got JSON-RPC error response 00:16:08.653 response: 00:16:08.653 { 00:16:08.653 "code": -5, 00:16:08.653 "message": "Input/output error" 00:16:08.653 } 00:16:08.653 04:04:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.653 04:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.218 request: 00:16:09.218 { 00:16:09.218 "name": "nvme0", 
00:16:09.218 "trtype": "rdma", 00:16:09.218 "traddr": "192.168.100.8", 00:16:09.218 "adrfam": "ipv4", 00:16:09.218 "trsvcid": "4420", 00:16:09.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:09.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:09.218 "prchk_reftag": false, 00:16:09.218 "prchk_guard": false, 00:16:09.218 "hdgst": false, 00:16:09.218 "ddgst": false, 00:16:09.218 "dhchap_key": "key1", 00:16:09.218 "dhchap_ctrlr_key": "ckey1", 00:16:09.218 "allow_unrecognized_csi": false, 00:16:09.218 "method": "bdev_nvme_attach_controller", 00:16:09.218 "req_id": 1 00:16:09.218 } 00:16:09.218 Got JSON-RPC error response 00:16:09.218 response: 00:16:09.218 { 00:16:09.218 "code": -5, 00:16:09.218 "message": "Input/output error" 00:16:09.218 } 00:16:09.218 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:09.218 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:09.218 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:09.218 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:09.218 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:09.218 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.218 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 745482 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 745482 ']' 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 745482 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 745482 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 745482' 00:16:09.219 killing process with pid 745482 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 745482 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 745482 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.219 04:04:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.219 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=770260 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 770260 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 770260 ']' 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 770260 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 770260 ']' 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.477 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.735 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:09.735 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:09.735 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.735 04:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 null0 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3Iw 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.iu8 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iu8 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iBz 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.i7Q ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.i7Q 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.M5c 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.cP0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cP0 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1JF 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.993 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.558 nvme0n1 00:16:10.816 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.816 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.816 04:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.816 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.816 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.816 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.816 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.816 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.816 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.816 { 00:16:10.816 "cntlid": 1, 00:16:10.816 "qid": 0, 00:16:10.816 "state": "enabled", 00:16:10.816 "thread": "nvmf_tgt_poll_group_000", 00:16:10.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:10.816 "listen_address": { 00:16:10.816 "trtype": "RDMA", 00:16:10.816 "adrfam": "IPv4", 00:16:10.816 "traddr": "192.168.100.8", 00:16:10.816 "trsvcid": "4420" 00:16:10.816 }, 00:16:10.816 "peer_address": { 00:16:10.816 "trtype": "RDMA", 00:16:10.816 "adrfam": "IPv4", 00:16:10.816 "traddr": "192.168.100.8", 00:16:10.816 "trsvcid": "59156" 00:16:10.816 }, 00:16:10.816 "auth": { 00:16:10.816 "state": "completed", 00:16:10.816 "digest": "sha512", 00:16:10.816 "dhgroup": "ffdhe8192" 00:16:10.816 } 00:16:10.816 } 00:16:10.816 ]' 00:16:10.816 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.816 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.816 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.073 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.073 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.073 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.073 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.073 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.073 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:16:11.073 04:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:16:11.637 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:11.895 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.152 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.411 request: 00:16:12.411 { 00:16:12.411 "name": "nvme0", 00:16:12.411 "trtype": "rdma", 00:16:12.411 "traddr": "192.168.100.8", 00:16:12.411 "adrfam": "ipv4", 00:16:12.411 "trsvcid": "4420", 00:16:12.411 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:12.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:12.411 "prchk_reftag": false, 00:16:12.411 "prchk_guard": false, 00:16:12.411 "hdgst": false, 00:16:12.411 "ddgst": false, 00:16:12.411 "dhchap_key": "key3", 00:16:12.411 "allow_unrecognized_csi": false, 00:16:12.411 "method": "bdev_nvme_attach_controller", 00:16:12.411 "req_id": 1 00:16:12.411 } 00:16:12.411 Got JSON-RPC error response 00:16:12.411 response: 00:16:12.411 { 00:16:12.411 "code": -5, 00:16:12.411 "message": "Input/output error" 00:16:12.411 } 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.411 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.669 request: 00:16:12.669 { 00:16:12.669 "name": "nvme0", 00:16:12.669 "trtype": "rdma", 00:16:12.669 "traddr": "192.168.100.8", 00:16:12.669 "adrfam": "ipv4", 00:16:12.669 "trsvcid": "4420", 00:16:12.669 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:12.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:12.669 "prchk_reftag": false, 00:16:12.669 "prchk_guard": false, 00:16:12.669 "hdgst": false, 00:16:12.669 "ddgst": false, 00:16:12.669 "dhchap_key": "key3", 00:16:12.669 "allow_unrecognized_csi": false, 00:16:12.669 "method": "bdev_nvme_attach_controller", 00:16:12.669 "req_id": 1 00:16:12.669 } 00:16:12.669 Got JSON-RPC error response 00:16:12.669 response: 00:16:12.669 { 00:16:12.669 "code": -5, 00:16:12.669 "message": "Input/output error" 00:16:12.669 } 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:12.669 04:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:12.927 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:13.184 request: 00:16:13.184 { 00:16:13.184 "name": "nvme0", 00:16:13.184 "trtype": "rdma", 00:16:13.184 "traddr": "192.168.100.8", 00:16:13.184 "adrfam": "ipv4", 00:16:13.184 "trsvcid": "4420", 00:16:13.184 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:13.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:13.184 "prchk_reftag": false, 00:16:13.184 "prchk_guard": false, 00:16:13.184 "hdgst": false, 00:16:13.184 "ddgst": false, 00:16:13.184 "dhchap_key": "key0", 00:16:13.184 "dhchap_ctrlr_key": "key1", 00:16:13.184 "allow_unrecognized_csi": false, 00:16:13.184 "method": "bdev_nvme_attach_controller", 00:16:13.184 "req_id": 1 00:16:13.184 } 00:16:13.184 Got JSON-RPC error response 00:16:13.184 response: 00:16:13.184 { 00:16:13.184 "code": -5, 00:16:13.184 "message": "Input/output error" 00:16:13.184 } 00:16:13.184 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:13.184 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:13.184 
04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:13.184 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:13.184 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:13.184 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:13.184 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:13.442 nvme0n1 00:16:13.442 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:13.442 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:13.442 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.700 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.700 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.700 04:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.957 04:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:16:13.957 04:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.957 04:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.957 04:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.957 04:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:13.957 04:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:13.957 04:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:14.522 nvme0n1 00:16:14.522 04:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:14.522 04:04:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:14.522 04:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.780 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.780 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:14.780 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.780 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.780 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.780 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:14.780 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:14.780 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.038 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.038 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:16:15.038 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: --dhchap-ctrl-secret DHHC-1:03:MWY3NTJkMTFkMTBiYjVmNDU1NmE1YWE3ZTAzOWVkZDNhN2VlYjU0YzQ2YzA3Y2U4ZDgzNWNjN2ZlY2IxODQ4MbYMAV8=: 00:16:15.603 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:15.603 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:15.603 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:15.603 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:15.603 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:15.603 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:15.603 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:15.603 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.603 04:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:15.860 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:16.117 request: 00:16:16.117 { 00:16:16.117 "name": "nvme0", 00:16:16.117 "trtype": "rdma", 00:16:16.117 "traddr": "192.168.100.8", 00:16:16.117 "adrfam": "ipv4", 00:16:16.117 "trsvcid": "4420", 00:16:16.117 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:16.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:16:16.117 "prchk_reftag": false, 00:16:16.117 "prchk_guard": false, 00:16:16.117 "hdgst": false, 00:16:16.117 "ddgst": false, 00:16:16.117 "dhchap_key": "key1", 00:16:16.117 "allow_unrecognized_csi": false, 00:16:16.117 "method": "bdev_nvme_attach_controller", 00:16:16.117 "req_id": 1 00:16:16.117 } 00:16:16.117 Got JSON-RPC error response 00:16:16.117 response: 00:16:16.117 { 00:16:16.117 "code": -5, 00:16:16.117 "message": "Input/output error" 00:16:16.117 } 00:16:16.117 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:16.117 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.117 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.117 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.117 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:16.117 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:16.117 04:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:17.050 nvme0n1 00:16:17.050 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:17.050 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:17.050 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.050 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.050 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.050 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.308 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:17.308 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.308 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.308 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.308 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:17.308 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:17.308 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:17.566 nvme0n1 00:16:17.566 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:17.566 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:17.566 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.824 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.824 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.824 04:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: '' 2s 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: ]] 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmIzYjhiMDQ3NjJmZGUzZDY1NzNjNWVkN2E1NmIzYjgf3UZ9: 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:17.824 04:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.348 04:04:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: 2s 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: ]] 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZGFiMmM5MDA5NmNkZTcxOTExYmQxYTVlZTgyN2Y4MDk0OWNlNTM5OTJhYWVlOTMwOGN7gA==: 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:20.348 04:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.244 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.244 04:04:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:22.245 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:22.245 04:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:22.809 nvme0n1 00:16:22.810 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:22.810 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.810 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.810 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.810 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:22.810 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:23.375 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:23.632 04:04:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:23.632 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:23.632 04:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:23.889 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:24.146 request: 00:16:24.146 { 00:16:24.146 "name": "nvme0", 00:16:24.146 "dhchap_key": "key1", 00:16:24.146 "dhchap_ctrlr_key": "key3", 00:16:24.146 "method": "bdev_nvme_set_keys", 00:16:24.146 "req_id": 1 00:16:24.146 } 00:16:24.146 Got JSON-RPC error response 00:16:24.146 response: 00:16:24.146 { 00:16:24.146 "code": -13, 00:16:24.146 "message": "Permission denied" 00:16:24.146 } 00:16:24.146 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:24.146 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:24.146 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:24.146 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:24.146 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:16:24.146 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:24.146 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.403 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:24.403 04:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:25.336 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:25.336 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:25.336 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.593 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:25.593 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:25.593 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.593 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.593 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.593 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:25.593 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:25.593 04:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:26.158 nvme0n1 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:26.416 
04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:26.416 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:26.674 request: 00:16:26.674 { 00:16:26.674 "name": "nvme0", 00:16:26.674 "dhchap_key": "key2", 00:16:26.674 "dhchap_ctrlr_key": "key0", 00:16:26.674 "method": "bdev_nvme_set_keys", 00:16:26.674 "req_id": 1 00:16:26.674 } 00:16:26.674 Got JSON-RPC error response 00:16:26.674 response: 00:16:26.674 { 00:16:26.674 "code": -13, 00:16:26.674 "message": "Permission denied" 00:16:26.674 } 00:16:26.674 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:26.674 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:26.674 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:26.674 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:26.674 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:26.674 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:26.674 04:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.932 04:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:26.932 04:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:27.866 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:27.866 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:27.866 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:28.124 04:04:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 745504 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 745504 ']' 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 745504 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 745504 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 745504' 00:16:28.124 killing process with pid 745504 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 745504 00:16:28.124 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 745504 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:28.382 rmmod nvme_rdma 00:16:28.382 rmmod nvme_fabrics 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 770260 ']' 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 770260 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 770260 ']' 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 770260 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.382 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770260 00:16:28.640 04:04:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.640 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.640 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770260' 00:16:28.640 killing process with pid 770260 00:16:28.640 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 770260 00:16:28.640 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 770260 00:16:28.640 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:28.640 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:28.640 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.3Iw /tmp/spdk.key-sha256.iBz /tmp/spdk.key-sha384.M5c /tmp/spdk.key-sha512.1JF /tmp/spdk.key-sha512.iu8 /tmp/spdk.key-sha384.i7Q /tmp/spdk.key-sha256.cP0 '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:16:28.640 00:16:28.640 real 2m32.518s 00:16:28.640 user 5m52.613s 00:16:28.640 sys 0m19.784s 00:16:28.640 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.640 04:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.640 ************************************ 00:16:28.640 END TEST nvmf_auth_target 00:16:28.640 ************************************ 00:16:28.640 04:04:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:16:28.640 04:04:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:28.640 04:04:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:16:28.640 04:04:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:16:28.640 04:04:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:16:28.640 04:04:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:28.640 04:04:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:28.640 04:04:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.640 04:04:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.898 ************************************ 00:16:28.898 START TEST nvmf_srq_overwhelm 00:16:28.898 ************************************ 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:28.899 * Looking for test storage... 
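
The nvmf_auth_target run that ends above exercises SPDK's DH-CHAP re-key path: keys are rotated on the target subsystem first, the host-side controller is then re-authenticated, and a deliberately mismatched pair is used to confirm the JSON-RPC -13 "Permission denied" failure. A minimal sketch of that sequence, reusing the RPC calls and key names exactly as they appear in the log (the rpc.py path and the /var/tmp/host.sock socket are specific to this CI workspace):

    #!/usr/bin/env bash
    # Sketch of the DH-CHAP re-key sequence driven by target/auth.sh above.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562

    # 1) Rotate the keys on the target subsystem first.
    $rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # 2) Re-authenticate the host-side controller with the matching pair.
    #    A mismatched pair (e.g. key2/key0) fails with JSON-RPC -13
    #    "Permission denied", as captured in the log above.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # 3) After a failed re-key, a controller attached with
    #    --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 is dropped;
    #    the test polls until the host's controller list is empty.
    while (( $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done
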
00:16:28.899 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lcov --version 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:28.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.899 --rc genhtml_branch_coverage=1 00:16:28.899 --rc genhtml_function_coverage=1 00:16:28.899 --rc genhtml_legend=1 00:16:28.899 --rc geninfo_all_blocks=1 00:16:28.899 --rc geninfo_unexecuted_blocks=1 00:16:28.899 00:16:28.899 ' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:28.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.899 --rc genhtml_branch_coverage=1 00:16:28.899 --rc genhtml_function_coverage=1 00:16:28.899 --rc genhtml_legend=1 00:16:28.899 --rc geninfo_all_blocks=1 00:16:28.899 --rc geninfo_unexecuted_blocks=1 00:16:28.899 00:16:28.899 ' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:28.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.899 --rc genhtml_branch_coverage=1 00:16:28.899 --rc genhtml_function_coverage=1 00:16:28.899 --rc genhtml_legend=1 00:16:28.899 --rc geninfo_all_blocks=1 00:16:28.899 --rc geninfo_unexecuted_blocks=1 00:16:28.899 00:16:28.899 ' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:28.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.899 --rc genhtml_branch_coverage=1 00:16:28.899 --rc genhtml_function_coverage=1 00:16:28.899 --rc genhtml_legend=1 00:16:28.899 --rc geninfo_all_blocks=1 00:16:28.899 --rc geninfo_unexecuted_blocks=1 00:16:28.899 00:16:28.899 ' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.899 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.899 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:16:28.900 04:04:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:35.459 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:35.460 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:35.460 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:35.460 Found net devices under 0000:18:00.0: mlx_0_0 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:35.460 Found net devices under 0000:18:00.1: mlx_0_1 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
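
The block above is nvmf/common.sh discovering which ConnectX ports to test on: it filters the PCI bus against a whitelist of Intel/Mellanox device IDs (0x15b3:0x1015 matches both ports in this run) and then resolves each PCI function to its kernel netdev through sysfs. A condensed sketch of that resolution step, with the two BDFs from this run hard-coded (on other machines they come from the pci_devs scan):

    #!/usr/bin/env bash
    # For each whitelisted NVMe-oF-capable PCI function, list the network
    # interfaces the kernel exposes under its sysfs node.
    for pci in 0000:18:00.0 0000:18:00.1; do
        # Each interface bound to this function appears as a directory
        # under .../net/.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # Strip the sysfs path, keeping only the names (mlx_0_0, mlx_0_1 here).
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
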
00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:35.460 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:35.460 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:35.460 altname enp24s0f0np0 00:16:35.460 altname ens785f0np0 00:16:35.460 inet 192.168.100.8/24 scope global mlx_0_0 00:16:35.460 valid_lft forever preferred_lft forever 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:35.460 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:35.460 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:35.460 altname enp24s0f1np1 00:16:35.460 altname ens785f1np1 00:16:35.460 inet 192.168.100.9/24 scope global mlx_0_1 00:16:35.460 valid_lft forever preferred_lft forever 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:35.460 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # cut -d/ -f1 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:35.461 192.168.100.9' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:35.461 192.168.100.9' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:35.461 192.168.100.9' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=777135 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 777135 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 777135 ']' 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
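For reference, the module-loading and address-discovery work traced above (load_ib_rdma_modules and allocate_nic_ips) reduces to the following; a minimal sketch assuming the two ConnectX ports are named mlx_0_0 and mlx_0_1 as in this run — the module list and the awk/cut pipeline are copied verbatim from the trace:

    # Kernel modules needed for NVMe-oF over RDMA
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done

    # Pull the IPv4 address off each RDMA-capable port: field 4 of
    # `ip -o -4 addr show` is ADDR/PREFIX, so cut away the prefix length
    for nic in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
    done

In this run that yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1, which the head/tail pair traced above turns into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.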
00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.461 04:04:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:35.461 [2024-12-10 04:04:29.041714] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:16:35.461 [2024-12-10 04:04:29.041758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.461 [2024-12-10 04:04:29.099211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.461 [2024-12-10 04:04:29.139545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.461 [2024-12-10 04:04:29.139579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.461 [2024-12-10 04:04:29.139586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.461 [2024-12-10 04:04:29.139592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.461 [2024-12-10 04:04:29.139596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.461 [2024-12-10 04:04:29.140851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.461 [2024-12-10 04:04:29.140945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.461 [2024-12-10 04:04:29.141020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.461 [2024-12-10 04:04:29.141022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:35.461 [2024-12-10 04:04:29.295563] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x199d0c0/0x19a15b0) succeed. 00:16:35.461 [2024-12-10 04:04:29.303813] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x199e750/0x19e2c50) succeed. 
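Distilled, the target bring-up just traced is two commands. A sketch of the standalone equivalents, run from an SPDK checkout — rpc_cmd in these traces is the test-harness wrapper around scripts/rpc.py, and the flag glosses below are read from rpc.py's help text, so verify them against your checkout:

    # Start the NVMe-oF target on cores 0-3 (-m 0xF) with tracepoints enabled (-e 0xFFFF)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Create the RDMA transport: 1024 shared data buffers, 8 KiB IO unit size (-u),
    # SRQ depth 1024 (-s) -- deliberately modest, since srq_overwhelm drives more
    # I/O than the SRQ can absorb
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024

The two create_ib_device NOTICE lines above confirm the transport picked up both mlx5 ports.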
00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:35.461 Malloc0 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:35.461 [2024-12-10 04:04:29.403780] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.461 04:04:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# lsblk -l -o NAME 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.028 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:36.287 Malloc1 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.287 04:04:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:37.260 Malloc2 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:37.260 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.261 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:16:37.261 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.261 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:37.261 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.261 04:04:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:16:38.233 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:16:38.234 04:04:32 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:38.234 Malloc3 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.234 04:04:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:39.170 
04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:39.170 Malloc4 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.170 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:39.429 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.429 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:16:39.429 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.429 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:39.429 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.429 04:04:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
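Each pass of the seq 0 5 loop (the trace above is entering its last iteration) repeats the same five steps for cnode$i, Malloc$i, and nvme${i}n1. Distilled into standalone equivalents of the rpc_cmd calls — a sketch, with the hostnqn/hostid values copied from the trace:

    i=5
    # 1. Subsystem allowing any host (-a), with a fixed serial number (-s)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    # 2. 64 MiB RAM-backed bdev with 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    # 3. Expose the bdev as a namespace of the subsystem
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    # 4. Listen on the first RDMA port
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    # 5. Connect from the initiator side; waitforblk then polls
    #    `lsblk -l -o NAME | grep -q -w nvme${i}n1` until the namespace shows up
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid=00bafac1-9c9c-e711-906e-0017a4403562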
00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:40.365 Malloc5 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.365 04:04:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:16:41.300 04:04:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:16:41.300 04:04:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:41.300 04:04:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:41.300 04:04:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:16:41.300 04:04:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:16:41.300 04:04:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:41.300 04:04:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:41.300 04:04:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:16:41.300 
[global] 00:16:41.300 thread=1 00:16:41.300 invalidate=1 00:16:41.300 rw=read 00:16:41.300 time_based=1 00:16:41.300 runtime=10 00:16:41.300 ioengine=libaio 00:16:41.300 direct=1 00:16:41.300 bs=1048576 00:16:41.300 iodepth=128 00:16:41.300 norandommap=1 00:16:41.300 numjobs=13 00:16:41.300 00:16:41.300 [job0] 00:16:41.300 filename=/dev/nvme0n1 00:16:41.300 [job1] 00:16:41.300 filename=/dev/nvme1n1 00:16:41.300 [job2] 00:16:41.300 filename=/dev/nvme2n1 00:16:41.300 [job3] 00:16:41.300 filename=/dev/nvme3n1 00:16:41.300 [job4] 00:16:41.300 filename=/dev/nvme4n1 00:16:41.300 [job5] 00:16:41.300 filename=/dev/nvme5n1 00:16:41.578 Could not set queue depth (nvme0n1) 00:16:41.578 Could not set queue depth (nvme1n1) 00:16:41.578 Could not set queue depth (nvme2n1) 00:16:41.578 Could not set queue depth (nvme3n1) 00:16:41.578 Could not set queue depth (nvme4n1) 00:16:41.578 Could not set queue depth (nvme5n1) 00:16:41.840 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:41.840 ... 00:16:41.840 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:41.840 ... 00:16:41.840 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:41.840 ... 00:16:41.840 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:41.840 ... 00:16:41.840 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:41.840 ... 00:16:41.840 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:41.840 ... 
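With the per-entry timestamps stripped, the job file the fio wrapper generated above reads as follows — every value is copied from the dump: 1 MiB sequential reads, queue depth 128, 13 jobs per device, 10-second time-based run, against the six namespaces connected earlier:

    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=1048576
    iodepth=128
    norandommap=1
    numjobs=13

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme1n1
    [job2]
    filename=/dev/nvme2n1
    [job3]
    filename=/dev/nvme3n1
    [job4]
    filename=/dev/nvme4n1
    [job5]
    filename=/dev/nvme5n1

Six devices at numjobs=13 accounts for the "Starting 78 threads" line that follows.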
00:16:41.840 fio-3.35 00:16:41.840 Starting 78 threads 00:16:54.042 00:16:54.042 job0: (groupid=0, jobs=1): err= 0: pid=778653: Tue Dec 10 04:04:47 2024 00:16:54.042 read: IOPS=6, BW=6610KiB/s (6769kB/s)(70.0MiB/10844msec) 00:16:54.042 slat (usec): min=1771, max=2091.0k, avg=154545.77, stdev=514457.15 00:16:54.042 clat (msec): min=25, max=10841, avg=5606.65, stdev=3910.89 00:16:54.042 lat (msec): min=1691, max=10843, avg=5761.20, stdev=3900.87 00:16:54.042 clat percentiles (msec): 00:16:54.042 | 1.00th=[ 26], 5.00th=[ 1720], 10.00th=[ 1770], 20.00th=[ 1871], 00:16:54.042 | 30.00th=[ 1972], 40.00th=[ 2089], 50.00th=[ 4245], 60.00th=[ 6409], 00:16:54.042 | 70.00th=[ 8557], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:16:54.042 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:54.042 | 99.99th=[10805] 00:16:54.042 lat (msec) : 50=1.43%, 2000=31.43%, >=2000=67.14% 00:16:54.042 cpu : usr=0.00%, sys=0.54%, ctx=140, majf=0, minf=17921 00:16:54.042 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:16:54.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.042 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:54.042 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.042 job0: (groupid=0, jobs=1): err= 0: pid=778654: Tue Dec 10 04:04:47 2024 00:16:54.042 read: IOPS=27, BW=27.6MiB/s (29.0MB/s)(300MiB/10855msec) 00:16:54.042 slat (usec): min=395, max=2050.6k, avg=36089.70, stdev=230709.43 00:16:54.042 clat (msec): min=27, max=10688, avg=3810.68, stdev=1686.01 00:16:54.042 lat (msec): min=774, max=10705, avg=3846.77, stdev=1709.81 00:16:54.042 clat percentiles (msec): 00:16:54.042 | 1.00th=[ 776], 5.00th=[ 776], 10.00th=[ 2022], 20.00th=[ 2869], 00:16:54.042 | 30.00th=[ 3641], 40.00th=[ 3876], 50.00th=[ 4010], 60.00th=[ 4010], 00:16:54.043 | 70.00th=[ 4044], 80.00th=[ 4111], 90.00th=[ 6275], 95.00th=[ 6342], 00:16:54.043 | 99.00th=[ 8557], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:54.043 | 99.99th=[10671] 00:16:54.043 bw ( KiB/s): min= 4096, max=110592, per=1.33%, avg=50322.29, stdev=37257.41, samples=7 00:16:54.043 iops : min= 4, max= 108, avg=49.14, stdev=36.38, samples=7 00:16:54.043 lat (msec) : 50=0.33%, 1000=8.33%, >=2000=91.33% 00:16:54.043 cpu : usr=0.01%, sys=0.93%, ctx=429, majf=0, minf=32769 00:16:54.043 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.3%, 32=10.7%, >=64=79.0% 00:16:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.043 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:16:54.043 issued rwts: total=300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.043 job0: (groupid=0, jobs=1): err= 0: pid=778655: Tue Dec 10 04:04:47 2024 00:16:54.043 read: IOPS=28, BW=28.9MiB/s (30.3MB/s)(313MiB/10847msec) 00:16:54.043 slat (usec): min=47, max=2157.2k, avg=34475.65, stdev=226620.17 00:16:54.043 clat (msec): min=54, max=9519, avg=4209.49, stdev=3649.33 00:16:54.043 lat (msec): min=777, max=9521, avg=4243.97, stdev=3651.82 00:16:54.043 clat percentiles (msec): 00:16:54.043 | 1.00th=[ 776], 5.00th=[ 844], 10.00th=[ 844], 20.00th=[ 944], 00:16:54.043 | 30.00th=[ 1003], 40.00th=[ 1133], 50.00th=[ 1888], 60.00th=[ 6409], 00:16:54.043 | 70.00th=[ 7752], 80.00th=[ 8792], 90.00th=[ 9194], 95.00th=[ 9463], 00:16:54.043 | 99.00th=[ 9463], 
99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:16:54.043 | 99.99th=[ 9463] 00:16:54.043 bw ( KiB/s): min= 2048, max=145408, per=1.25%, avg=47360.00, stdev=47356.94, samples=8 00:16:54.043 iops : min= 2, max= 142, avg=46.25, stdev=46.25, samples=8 00:16:54.043 lat (msec) : 100=0.32%, 1000=27.16%, 2000=26.52%, >=2000=46.01% 00:16:54.043 cpu : usr=0.01%, sys=1.12%, ctx=453, majf=0, minf=32769 00:16:54.043 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.2%, >=64=79.9% 00:16:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.043 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:54.043 issued rwts: total=313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.043 job0: (groupid=0, jobs=1): err= 0: pid=778656: Tue Dec 10 04:04:47 2024 00:16:54.043 read: IOPS=14, BW=14.3MiB/s (14.9MB/s)(155MiB/10872msec) 00:16:54.043 slat (usec): min=90, max=2082.1k, avg=69762.86, stdev=339505.91 00:16:54.043 clat (msec): min=57, max=10771, avg=4593.03, stdev=3687.80 00:16:54.043 lat (msec): min=1359, max=10778, avg=4662.80, stdev=3703.50 00:16:54.043 clat percentiles (msec): 00:16:54.043 | 1.00th=[ 1368], 5.00th=[ 1401], 10.00th=[ 1435], 20.00th=[ 1552], 00:16:54.043 | 30.00th=[ 1670], 40.00th=[ 1770], 50.00th=[ 1989], 60.00th=[ 4245], 00:16:54.043 | 70.00th=[ 6477], 80.00th=[ 9463], 90.00th=[10537], 95.00th=[10671], 00:16:54.043 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:54.043 | 99.99th=[10805] 00:16:54.043 bw ( KiB/s): min= 2048, max=53248, per=0.73%, avg=27648.00, stdev=36203.87, samples=2 00:16:54.043 iops : min= 2, max= 52, avg=27.00, stdev=35.36, samples=2 00:16:54.043 lat (msec) : 100=0.65%, 2000=49.68%, >=2000=49.68% 00:16:54.043 cpu : usr=0.00%, sys=0.97%, ctx=200, majf=0, minf=32769 00:16:54.043 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.3%, 32=20.6%, >=64=59.4% 00:16:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.043 complete : 0=0.0%, 4=96.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.4% 00:16:54.043 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.043 job0: (groupid=0, jobs=1): err= 0: pid=778657: Tue Dec 10 04:04:47 2024 00:16:54.043 read: IOPS=16, BW=16.8MiB/s (17.6MB/s)(169MiB/10042msec) 00:16:54.043 slat (usec): min=94, max=2143.6k, avg=59169.94, stdev=318345.74 00:16:54.043 clat (msec): min=41, max=9776, avg=1743.42, stdev=2444.77 00:16:54.043 lat (msec): min=46, max=9795, avg=1802.59, stdev=2520.62 00:16:54.043 clat percentiles (msec): 00:16:54.043 | 1.00th=[ 46], 5.00th=[ 63], 10.00th=[ 105], 20.00th=[ 174], 00:16:54.043 | 30.00th=[ 351], 40.00th=[ 493], 50.00th=[ 709], 60.00th=[ 961], 00:16:54.043 | 70.00th=[ 1301], 80.00th=[ 3473], 90.00th=[ 5604], 95.00th=[ 7684], 00:16:54.043 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:16:54.043 | 99.99th=[ 9731] 00:16:54.043 bw ( KiB/s): min=85333, max=85333, per=2.25%, avg=85333.00, stdev= 0.00, samples=1 00:16:54.043 iops : min= 83, max= 83, avg=83.00, stdev= 0.00, samples=1 00:16:54.043 lat (msec) : 50=1.18%, 100=7.10%, 250=16.57%, 500=15.38%, 750=11.24% 00:16:54.043 lat (msec) : 1000=9.47%, 2000=18.93%, >=2000=20.12% 00:16:54.043 cpu : usr=0.00%, sys=0.78%, ctx=314, majf=0, minf=32769 00:16:54.043 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.7%, 16=9.5%, 32=18.9%, >=64=62.7% 00:16:54.043 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.043 complete : 0=0.0%, 4=97.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.3% 00:16:54.043 issued rwts: total=169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.043 job0: (groupid=0, jobs=1): err= 0: pid=778658: Tue Dec 10 04:04:47 2024 00:16:54.043 read: IOPS=14, BW=14.8MiB/s (15.5MB/s)(149MiB/10096msec) 00:16:54.043 slat (usec): min=82, max=2096.5k, avg=67123.87, stdev=333595.16 00:16:54.043 clat (msec): min=93, max=9886, avg=2597.25, stdev=3534.60 00:16:54.043 lat (msec): min=96, max=9889, avg=2664.37, stdev=3580.98 00:16:54.043 clat percentiles (msec): 00:16:54.043 | 1.00th=[ 96], 5.00th=[ 136], 10.00th=[ 186], 20.00th=[ 279], 00:16:54.043 | 30.00th=[ 372], 40.00th=[ 468], 50.00th=[ 735], 60.00th=[ 1133], 00:16:54.043 | 70.00th=[ 1401], 80.00th=[ 5671], 90.00th=[ 9866], 95.00th=[ 9866], 00:16:54.043 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:16:54.043 | 99.99th=[ 9866] 00:16:54.043 bw ( KiB/s): min=45056, max=45056, per=1.19%, avg=45056.00, stdev= 0.00, samples=1 00:16:54.043 iops : min= 44, max= 44, avg=44.00, stdev= 0.00, samples=1 00:16:54.043 lat (msec) : 100=1.34%, 250=16.11%, 500=25.50%, 750=7.38%, 1000=7.38% 00:16:54.043 lat (msec) : 2000=15.44%, >=2000=26.85% 00:16:54.043 cpu : usr=0.00%, sys=1.03%, ctx=265, majf=0, minf=32769 00:16:54.043 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=5.4%, 16=10.7%, 32=21.5%, >=64=57.7% 00:16:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.043 complete : 0=0.0%, 4=95.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.3% 00:16:54.043 issued rwts: total=149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.043 job0: (groupid=0, jobs=1): err= 0: pid=778659: Tue Dec 10 04:04:47 2024 00:16:54.043 read: IOPS=152, BW=153MiB/s (160MB/s)(1653MiB/10838msec) 00:16:54.043 slat (usec): min=27, max=1997.6k, avg=6537.50, stdev=57430.39 00:16:54.043 clat (msec): min=27, max=3724, avg=804.76, stdev=763.94 00:16:54.043 lat (msec): min=202, max=3752, avg=811.30, stdev=766.40 00:16:54.043 clat percentiles (msec): 00:16:54.043 | 1.00th=[ 218], 5.00th=[ 230], 10.00th=[ 245], 20.00th=[ 347], 00:16:54.043 | 30.00th=[ 518], 40.00th=[ 600], 50.00th=[ 651], 60.00th=[ 735], 00:16:54.043 | 70.00th=[ 776], 80.00th=[ 827], 90.00th=[ 927], 95.00th=[ 3339], 00:16:54.043 | 99.00th=[ 3608], 99.50th=[ 3641], 99.90th=[ 3708], 99.95th=[ 3742], 00:16:54.043 | 99.99th=[ 3742] 00:16:54.043 bw ( KiB/s): min=30720, max=462848, per=5.14%, avg=195200.00, stdev=118091.59, samples=16 00:16:54.043 iops : min= 30, max= 452, avg=190.62, stdev=115.32, samples=16 00:16:54.043 lat (msec) : 50=0.06%, 250=11.37%, 500=17.66%, 750=35.81%, 1000=26.26% 00:16:54.043 lat (msec) : 2000=1.15%, >=2000=7.68% 00:16:54.043 cpu : usr=0.02%, sys=1.68%, ctx=1637, majf=0, minf=32769 00:16:54.043 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:16:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.043 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.043 issued rwts: total=1653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.043 job0: (groupid=0, jobs=1): err= 0: pid=778660: Tue Dec 10 04:04:47 2024 00:16:54.043 read: IOPS=115, BW=115MiB/s (121MB/s)(1158MiB/10063msec) 00:16:54.043 slat (usec): 
min=28, max=2198.2k, avg=8634.00, stdev=65208.32 00:16:54.043 clat (msec): min=60, max=3875, avg=1036.71, stdev=943.57 00:16:54.043 lat (msec): min=65, max=3876, avg=1045.34, stdev=946.86 00:16:54.043 clat percentiles (msec): 00:16:54.043 | 1.00th=[ 153], 5.00th=[ 388], 10.00th=[ 456], 20.00th=[ 489], 00:16:54.043 | 30.00th=[ 510], 40.00th=[ 550], 50.00th=[ 651], 60.00th=[ 760], 00:16:54.043 | 70.00th=[ 986], 80.00th=[ 1351], 90.00th=[ 3071], 95.00th=[ 3641], 00:16:54.043 | 99.00th=[ 3876], 99.50th=[ 3876], 99.90th=[ 3876], 99.95th=[ 3876], 00:16:54.043 | 99.99th=[ 3876] 00:16:54.043 bw ( KiB/s): min= 2048, max=278528, per=3.71%, avg=140765.87, stdev=93134.85, samples=15 00:16:54.043 iops : min= 2, max= 272, avg=137.47, stdev=90.95, samples=15 00:16:54.043 lat (msec) : 100=0.60%, 250=1.04%, 500=24.27%, 750=33.51%, 1000=10.88% 00:16:54.043 lat (msec) : 2000=18.74%, >=2000=10.97% 00:16:54.043 cpu : usr=0.03%, sys=1.67%, ctx=2151, majf=0, minf=32769 00:16:54.043 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6% 00:16:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.043 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.043 issued rwts: total=1158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.043 job0: (groupid=0, jobs=1): err= 0: pid=778661: Tue Dec 10 04:04:47 2024 00:16:54.043 read: IOPS=121, BW=121MiB/s (127MB/s)(1310MiB/10788msec) 00:16:54.043 slat (usec): min=39, max=2039.2k, avg=8185.15, stdev=62049.72 00:16:54.043 clat (msec): min=59, max=4299, avg=991.93, stdev=648.33 00:16:54.043 lat (msec): min=553, max=5178, avg=1000.12, stdev=653.03 00:16:54.043 clat percentiles (msec): 00:16:54.043 | 1.00th=[ 558], 5.00th=[ 567], 10.00th=[ 584], 20.00th=[ 600], 00:16:54.043 | 30.00th=[ 625], 40.00th=[ 634], 50.00th=[ 667], 60.00th=[ 785], 00:16:54.043 | 70.00th=[ 852], 80.00th=[ 1183], 90.00th=[ 2056], 95.00th=[ 2467], 00:16:54.043 | 99.00th=[ 2769], 99.50th=[ 2769], 99.90th=[ 4279], 99.95th=[ 4329], 00:16:54.043 | 99.99th=[ 4329] 00:16:54.043 bw ( KiB/s): min= 4096, max=239616, per=3.75%, avg=142396.24, stdev=82722.84, samples=17 00:16:54.043 iops : min= 4, max= 234, avg=139.06, stdev=80.78, samples=17 00:16:54.043 lat (msec) : 100=0.08%, 750=57.02%, 1000=21.83%, 2000=7.71%, >=2000=13.36% 00:16:54.043 cpu : usr=0.06%, sys=1.42%, ctx=2068, majf=0, minf=32769 00:16:54.043 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:16:54.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.043 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.043 issued rwts: total=1310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.044 job0: (groupid=0, jobs=1): err= 0: pid=778662: Tue Dec 10 04:04:47 2024 00:16:54.044 read: IOPS=166, BW=167MiB/s (175MB/s)(1793MiB/10756msec) 00:16:54.044 slat (usec): min=47, max=1310.5k, avg=5962.12, stdev=31642.27 00:16:54.044 clat (msec): min=58, max=2321, avg=730.32, stdev=404.54 00:16:54.044 lat (msec): min=249, max=2322, avg=736.28, stdev=405.78 00:16:54.044 clat percentiles (msec): 00:16:54.044 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 313], 20.00th=[ 468], 00:16:54.044 | 30.00th=[ 502], 40.00th=[ 584], 50.00th=[ 651], 60.00th=[ 709], 00:16:54.044 | 70.00th=[ 802], 80.00th=[ 877], 90.00th=[ 1200], 95.00th=[ 1703], 00:16:54.044 | 99.00th=[ 2265], 99.50th=[ 2265], 
99.90th=[ 2333], 99.95th=[ 2333], 00:16:54.044 | 99.99th=[ 2333] 00:16:54.044 bw ( KiB/s): min=49152, max=416958, per=4.99%, avg=189381.17, stdev=90218.84, samples=18 00:16:54.044 iops : min= 48, max= 407, avg=184.89, stdev=88.12, samples=18 00:16:54.044 lat (msec) : 100=0.06%, 250=1.12%, 500=27.55%, 750=36.25%, 1000=20.58% 00:16:54.044 lat (msec) : 2000=11.77%, >=2000=2.68% 00:16:54.044 cpu : usr=0.07%, sys=1.75%, ctx=2153, majf=0, minf=32769 00:16:54.044 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:54.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.044 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.044 issued rwts: total=1793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.044 job0: (groupid=0, jobs=1): err= 0: pid=778663: Tue Dec 10 04:04:47 2024 00:16:54.044 read: IOPS=107, BW=107MiB/s (112MB/s)(1153MiB/10762msec) 00:16:54.044 slat (usec): min=29, max=2101.7k, avg=9281.50, stdev=66964.82 00:16:54.044 clat (msec): min=56, max=3965, avg=1130.16, stdev=895.42 00:16:54.044 lat (msec): min=357, max=3970, avg=1139.45, stdev=898.81 00:16:54.044 clat percentiles (msec): 00:16:54.044 | 1.00th=[ 363], 5.00th=[ 384], 10.00th=[ 456], 20.00th=[ 592], 00:16:54.044 | 30.00th=[ 642], 40.00th=[ 751], 50.00th=[ 793], 60.00th=[ 835], 00:16:54.044 | 70.00th=[ 1028], 80.00th=[ 1502], 90.00th=[ 3004], 95.00th=[ 3440], 00:16:54.044 | 99.00th=[ 3876], 99.50th=[ 3910], 99.90th=[ 3977], 99.95th=[ 3977], 00:16:54.044 | 99.99th=[ 3977] 00:16:54.044 bw ( KiB/s): min= 2043, max=294323, per=3.45%, avg=131146.00, stdev=87023.61, samples=16 00:16:54.044 iops : min= 1, max= 287, avg=127.94, stdev=85.07, samples=16 00:16:54.044 lat (msec) : 100=0.09%, 500=13.70%, 750=25.33%, 1000=30.62%, 2000=18.99% 00:16:54.044 lat (msec) : >=2000=11.27% 00:16:54.044 cpu : usr=0.04%, sys=1.14%, ctx=1694, majf=0, minf=32769 00:16:54.044 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:16:54.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.044 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.044 issued rwts: total=1153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.044 job0: (groupid=0, jobs=1): err= 0: pid=778664: Tue Dec 10 04:04:47 2024 00:16:54.044 read: IOPS=24, BW=24.9MiB/s (26.1MB/s)(271MiB/10890msec) 00:16:54.044 slat (usec): min=431, max=2062.9k, avg=39977.39, stdev=244131.93 00:16:54.044 clat (msec): min=54, max=9659, avg=4838.91, stdev=3769.24 00:16:54.044 lat (msec): min=961, max=9660, avg=4878.89, stdev=3765.24 00:16:54.044 clat percentiles (msec): 00:16:54.044 | 1.00th=[ 961], 5.00th=[ 1011], 10.00th=[ 1036], 20.00th=[ 1083], 00:16:54.044 | 30.00th=[ 1150], 40.00th=[ 1183], 50.00th=[ 3473], 60.00th=[ 7550], 00:16:54.044 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9463], 95.00th=[ 9597], 00:16:54.044 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:16:54.044 | 99.99th=[ 9597] 00:16:54.044 bw ( KiB/s): min= 4096, max=104448, per=1.10%, avg=41837.71, stdev=41919.13, samples=7 00:16:54.044 iops : min= 4, max= 102, avg=40.86, stdev=40.94, samples=7 00:16:54.044 lat (msec) : 100=0.37%, 1000=2.58%, 2000=42.80%, >=2000=54.24% 00:16:54.044 cpu : usr=0.00%, sys=1.03%, ctx=523, majf=0, minf=32769 00:16:54.044 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.8%, 
>=64=76.8% 00:16:54.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.044 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:16:54.044 issued rwts: total=271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.044 job0: (groupid=0, jobs=1): err= 0: pid=778665: Tue Dec 10 04:04:47 2024 00:16:54.044 read: IOPS=4, BW=4657KiB/s (4769kB/s)(49.0MiB/10774msec) 00:16:54.044 slat (usec): min=771, max=2062.8k, avg=218742.28, stdev=613492.15 00:16:54.044 clat (msec): min=54, max=10771, avg=7996.04, stdev=3075.70 00:16:54.044 lat (msec): min=2103, max=10773, avg=8214.78, stdev=2873.65 00:16:54.044 clat percentiles (msec): 00:16:54.044 | 1.00th=[ 55], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 4279], 00:16:54.044 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[10402], 60.00th=[10537], 00:16:54.044 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:16:54.044 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:54.044 | 99.99th=[10805] 00:16:54.044 lat (msec) : 100=2.04%, >=2000=97.96% 00:16:54.044 cpu : usr=0.00%, sys=0.32%, ctx=87, majf=0, minf=12545 00:16:54.044 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:16:54.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.044 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:54.044 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.044 job1: (groupid=0, jobs=1): err= 0: pid=778666: Tue Dec 10 04:04:47 2024 00:16:54.044 read: IOPS=22, BW=22.2MiB/s (23.3MB/s)(238MiB/10730msec) 00:16:54.044 slat (usec): min=29, max=2078.8k, avg=44775.46, stdev=262783.40 00:16:54.044 clat (msec): min=71, max=5750, avg=3288.84, stdev=1623.31 00:16:54.044 lat (msec): min=1145, max=5789, avg=3333.62, stdev=1613.75 00:16:54.044 clat percentiles (msec): 00:16:54.044 | 1.00th=[ 1150], 5.00th=[ 1217], 10.00th=[ 1250], 20.00th=[ 1418], 00:16:54.044 | 30.00th=[ 1603], 40.00th=[ 2106], 50.00th=[ 4279], 60.00th=[ 4463], 00:16:54.044 | 70.00th=[ 4597], 80.00th=[ 4732], 90.00th=[ 5067], 95.00th=[ 5201], 00:16:54.044 | 99.00th=[ 5738], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:16:54.044 | 99.99th=[ 5738] 00:16:54.044 bw ( KiB/s): min= 8192, max=155648, per=1.48%, avg=56320.00, stdev=68386.03, samples=4 00:16:54.044 iops : min= 8, max= 152, avg=55.00, stdev=66.78, samples=4 00:16:54.044 lat (msec) : 100=0.42%, 2000=39.50%, >=2000=60.08% 00:16:54.044 cpu : usr=0.00%, sys=0.77%, ctx=430, majf=0, minf=32769 00:16:54.044 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.7%, 32=13.4%, >=64=73.5% 00:16:54.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.044 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:16:54.044 issued rwts: total=238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.044 job1: (groupid=0, jobs=1): err= 0: pid=778667: Tue Dec 10 04:04:47 2024 00:16:54.044 read: IOPS=35, BW=35.4MiB/s (37.1MB/s)(382MiB/10795msec) 00:16:54.044 slat (usec): min=45, max=2048.8k, avg=28098.28, stdev=201166.62 00:16:54.044 clat (msec): min=60, max=5570, avg=2548.61, stdev=1985.28 00:16:54.044 lat (msec): min=526, max=5574, avg=2576.71, stdev=1986.02 00:16:54.044 clat percentiles (msec): 00:16:54.044 | 1.00th=[ 527], 
5.00th=[ 531], 10.00th=[ 550], 20.00th=[ 651], 00:16:54.044 | 30.00th=[ 760], 40.00th=[ 885], 50.00th=[ 2366], 60.00th=[ 2433], 00:16:54.044 | 70.00th=[ 4732], 80.00th=[ 5000], 90.00th=[ 5269], 95.00th=[ 5403], 00:16:54.044 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5604], 99.95th=[ 5604], 00:16:54.044 | 99.99th=[ 5604] 00:16:54.044 bw ( KiB/s): min= 4096, max=182272, per=2.28%, avg=86698.67, stdev=84139.32, samples=6 00:16:54.044 iops : min= 4, max= 178, avg=84.67, stdev=82.17, samples=6 00:16:54.044 lat (msec) : 100=0.26%, 750=26.96%, 1000=19.63%, 2000=1.57%, >=2000=51.57% 00:16:54.044 cpu : usr=0.02%, sys=0.74%, ctx=461, majf=0, minf=32769 00:16:54.044 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5% 00:16:54.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.044 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:54.044 issued rwts: total=382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.044 job1: (groupid=0, jobs=1): err= 0: pid=778668: Tue Dec 10 04:04:47 2024 00:16:54.044 read: IOPS=40, BW=40.6MiB/s (42.6MB/s)(411MiB/10111msec) 00:16:54.044 slat (usec): min=41, max=2078.3k, avg=24383.94, stdev=164648.87 00:16:54.044 clat (msec): min=86, max=7721, avg=2449.24, stdev=1991.33 00:16:54.044 lat (msec): min=112, max=7726, avg=2473.62, stdev=2007.99 00:16:54.044 clat percentiles (msec): 00:16:54.044 | 1.00th=[ 169], 5.00th=[ 405], 10.00th=[ 693], 20.00th=[ 1099], 00:16:54.044 | 30.00th=[ 1284], 40.00th=[ 1385], 50.00th=[ 1469], 60.00th=[ 1620], 00:16:54.044 | 70.00th=[ 1787], 80.00th=[ 4866], 90.00th=[ 5201], 95.00th=[ 6745], 00:16:54.044 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 7752], 99.95th=[ 7752], 00:16:54.044 | 99.99th=[ 7752] 00:16:54.044 bw ( KiB/s): min=61440, max=104657, per=2.18%, avg=82899.57, stdev=15430.99, samples=7 00:16:54.044 iops : min= 60, max= 102, avg=80.86, stdev=15.00, samples=7 00:16:54.044 lat (msec) : 100=0.24%, 250=1.95%, 500=4.62%, 750=4.38%, 1000=6.81% 00:16:54.044 lat (msec) : 2000=52.07%, >=2000=29.93% 00:16:54.044 cpu : usr=0.01%, sys=1.23%, ctx=734, majf=0, minf=32769 00:16:54.044 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.8%, >=64=84.7% 00:16:54.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.044 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:54.044 issued rwts: total=411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.044 job1: (groupid=0, jobs=1): err= 0: pid=778669: Tue Dec 10 04:04:47 2024 00:16:54.044 read: IOPS=44, BW=44.4MiB/s (46.6MB/s)(447MiB/10060msec) 00:16:54.044 slat (usec): min=101, max=2158.0k, avg=22370.46, stdev=174647.64 00:16:54.044 clat (msec): min=58, max=7459, avg=1127.52, stdev=1288.86 00:16:54.044 lat (msec): min=60, max=7479, avg=1149.89, stdev=1330.83 00:16:54.044 clat percentiles (msec): 00:16:54.044 | 1.00th=[ 64], 5.00th=[ 127], 10.00th=[ 236], 20.00th=[ 414], 00:16:54.044 | 30.00th=[ 485], 40.00th=[ 527], 50.00th=[ 768], 60.00th=[ 936], 00:16:54.044 | 70.00th=[ 1620], 80.00th=[ 1687], 90.00th=[ 1754], 95.00th=[ 1804], 00:16:54.044 | 99.00th=[ 7416], 99.50th=[ 7416], 99.90th=[ 7483], 99.95th=[ 7483], 00:16:54.044 | 99.99th=[ 7483] 00:16:54.044 bw ( KiB/s): min=22528, max=258504, per=3.42%, avg=129934.40, stdev=92438.99, samples=5 00:16:54.044 iops : min= 22, max= 252, avg=126.80, stdev=90.12, samples=5 
00:16:54.044 lat (msec) : 100=2.24%, 250=9.62%, 500=23.04%, 750=14.54%, 1000=15.66% 00:16:54.044 lat (msec) : 2000=29.98%, >=2000=4.92% 00:16:54.045 cpu : usr=0.01%, sys=0.84%, ctx=707, majf=0, minf=32769 00:16:54.045 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9% 00:16:54.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.045 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:54.045 issued rwts: total=447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.045 job1: (groupid=0, jobs=1): err= 0: pid=778670: Tue Dec 10 04:04:47 2024 00:16:54.045 read: IOPS=33, BW=33.4MiB/s (35.1MB/s)(336MiB/10048msec) 00:16:54.045 slat (usec): min=63, max=2144.7k, avg=29770.77, stdev=201861.19 00:16:54.045 clat (msec): min=43, max=6341, avg=2140.69, stdev=1843.35 00:16:54.045 lat (msec): min=53, max=6344, avg=2170.46, stdev=1864.22 00:16:54.045 clat percentiles (msec): 00:16:54.045 | 1.00th=[ 55], 5.00th=[ 136], 10.00th=[ 243], 20.00th=[ 443], 00:16:54.045 | 30.00th=[ 642], 40.00th=[ 902], 50.00th=[ 953], 60.00th=[ 3205], 00:16:54.045 | 70.00th=[ 4178], 80.00th=[ 4212], 90.00th=[ 4245], 95.00th=[ 4329], 00:16:54.045 | 99.00th=[ 6275], 99.50th=[ 6342], 99.90th=[ 6342], 99.95th=[ 6342], 00:16:54.045 | 99.99th=[ 6342] 00:16:54.045 bw ( KiB/s): min=12288, max=168300, per=2.25%, avg=85269.60, stdev=66264.64, samples=5 00:16:54.045 iops : min= 12, max= 164, avg=83.20, stdev=64.60, samples=5 00:16:54.045 lat (msec) : 50=0.30%, 100=3.57%, 250=6.85%, 500=11.90%, 750=12.20% 00:16:54.045 lat (msec) : 1000=20.83%, 2000=1.49%, >=2000=42.86% 00:16:54.045 cpu : usr=0.01%, sys=0.64%, ctx=528, majf=0, minf=32769 00:16:54.045 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.5%, >=64=81.2% 00:16:54.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.045 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:54.045 issued rwts: total=336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.045 job1: (groupid=0, jobs=1): err= 0: pid=778671: Tue Dec 10 04:04:47 2024 00:16:54.045 read: IOPS=73, BW=73.3MiB/s (76.9MB/s)(796MiB/10857msec) 00:16:54.045 slat (usec): min=40, max=2076.5k, avg=13545.01, stdev=121007.81 00:16:54.045 clat (msec): min=71, max=5297, avg=1203.28, stdev=1090.15 00:16:54.045 lat (msec): min=526, max=5301, avg=1216.82, stdev=1098.66 00:16:54.045 clat percentiles (msec): 00:16:54.045 | 1.00th=[ 531], 5.00th=[ 567], 10.00th=[ 600], 20.00th=[ 634], 00:16:54.045 | 30.00th=[ 642], 40.00th=[ 659], 50.00th=[ 693], 60.00th=[ 844], 00:16:54.045 | 70.00th=[ 969], 80.00th=[ 1972], 90.00th=[ 2400], 95.00th=[ 3138], 00:16:54.045 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:16:54.045 | 99.99th=[ 5269] 00:16:54.045 bw ( KiB/s): min=96256, max=229376, per=4.50%, avg=171008.00, stdev=45833.91, samples=8 00:16:54.045 iops : min= 94, max= 224, avg=167.00, stdev=44.76, samples=8 00:16:54.045 lat (msec) : 100=0.13%, 750=53.02%, 1000=22.36%, 2000=5.15%, >=2000=19.35% 00:16:54.045 cpu : usr=0.04%, sys=1.15%, ctx=867, majf=0, minf=32769 00:16:54.045 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:16:54.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.045 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.045 issued rwts: 
total=796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.045 job1: (groupid=0, jobs=1): err= 0: pid=778672: Tue Dec 10 04:04:47 2024 00:16:54.045 read: IOPS=93, BW=93.8MiB/s (98.3MB/s)(946MiB/10086msec) 00:16:54.045 slat (usec): min=42, max=2055.6k, avg=10574.34, stdev=94503.96 00:16:54.045 clat (msec): min=78, max=5282, avg=1013.64, stdev=1085.03 00:16:54.045 lat (msec): min=85, max=5291, avg=1024.21, stdev=1093.47 00:16:54.045 clat percentiles (msec): 00:16:54.045 | 1.00th=[ 218], 5.00th=[ 363], 10.00th=[ 376], 20.00th=[ 485], 00:16:54.045 | 30.00th=[ 651], 40.00th=[ 751], 50.00th=[ 802], 60.00th=[ 860], 00:16:54.045 | 70.00th=[ 902], 80.00th=[ 961], 90.00th=[ 1133], 95.00th=[ 5067], 00:16:54.045 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:16:54.045 | 99.99th=[ 5269] 00:16:54.045 bw ( KiB/s): min= 6144, max=335201, per=4.01%, avg=152421.91, stdev=78350.78, samples=11 00:16:54.045 iops : min= 6, max= 327, avg=148.82, stdev=76.43, samples=11 00:16:54.045 lat (msec) : 100=0.32%, 250=1.37%, 500=19.13%, 750=19.03%, 1000=44.08% 00:16:54.045 lat (msec) : 2000=9.51%, >=2000=6.55% 00:16:54.045 cpu : usr=0.04%, sys=1.57%, ctx=1367, majf=0, minf=32332 00:16:54.045 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:16:54.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.045 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.045 issued rwts: total=946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.045 job1: (groupid=0, jobs=1): err= 0: pid=778673: Tue Dec 10 04:04:47 2024 00:16:54.045 read: IOPS=32, BW=32.7MiB/s (34.3MB/s)(352MiB/10767msec) 00:16:54.045 slat (usec): min=80, max=2088.6k, avg=30471.10, stdev=219169.34 00:16:54.045 clat (msec): min=38, max=9112, avg=3648.78, stdev=3543.35 00:16:54.045 lat (msec): min=714, max=9113, avg=3679.25, stdev=3546.72 00:16:54.045 clat percentiles (msec): 00:16:54.045 | 1.00th=[ 718], 5.00th=[ 726], 10.00th=[ 726], 20.00th=[ 735], 00:16:54.045 | 30.00th=[ 751], 40.00th=[ 768], 50.00th=[ 919], 60.00th=[ 2836], 00:16:54.045 | 70.00th=[ 6477], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 9060], 00:16:54.045 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:16:54.045 | 99.99th=[ 9060] 00:16:54.045 bw ( KiB/s): min= 6144, max=174080, per=1.73%, avg=65536.00, stdev=69501.37, samples=7 00:16:54.045 iops : min= 6, max= 170, avg=64.00, stdev=67.87, samples=7 00:16:54.045 lat (msec) : 50=0.28%, 750=30.11%, 1000=22.44%, 2000=1.70%, >=2000=45.45% 00:16:54.045 cpu : usr=0.01%, sys=1.08%, ctx=362, majf=0, minf=32769 00:16:54.045 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.1%, >=64=82.1% 00:16:54.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.045 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:54.045 issued rwts: total=352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.045 job1: (groupid=0, jobs=1): err= 0: pid=778674: Tue Dec 10 04:04:47 2024 00:16:54.045 read: IOPS=25, BW=25.6MiB/s (26.9MB/s)(279MiB/10883msec) 00:16:54.045 slat (usec): min=83, max=2070.8k, avg=38783.82, stdev=228186.95 00:16:54.045 clat (msec): min=60, max=9946, avg=4786.95, stdev=2856.77 00:16:54.045 lat (msec): min=761, max=10007, avg=4825.73, stdev=2861.28 00:16:54.045 clat 
percentiles (msec): 00:16:54.045 | 1.00th=[ 760], 5.00th=[ 802], 10.00th=[ 1368], 20.00th=[ 1620], 00:16:54.045 | 30.00th=[ 2022], 40.00th=[ 3977], 50.00th=[ 4279], 60.00th=[ 6342], 00:16:54.045 | 70.00th=[ 6477], 80.00th=[ 8356], 90.00th=[ 8557], 95.00th=[ 8658], 00:16:54.045 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[10000], 99.95th=[10000], 00:16:54.045 | 99.99th=[10000] 00:16:54.045 bw ( KiB/s): min= 6144, max=83968, per=0.90%, avg=34350.33, stdev=27352.31, samples=9 00:16:54.045 iops : min= 6, max= 82, avg=33.44, stdev=26.66, samples=9 00:16:54.045 lat (msec) : 100=0.36%, 1000=8.96%, 2000=20.43%, >=2000=70.25% 00:16:54.045 cpu : usr=0.01%, sys=1.14%, ctx=368, majf=0, minf=32769 00:16:54.045 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.7%, 32=11.5%, >=64=77.4% 00:16:54.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.045 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:16:54.045 issued rwts: total=279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.045 job1: (groupid=0, jobs=1): err= 0: pid=778675: Tue Dec 10 04:04:47 2024 00:16:54.045 read: IOPS=16, BW=16.5MiB/s (17.3MB/s)(166MiB/10079msec) 00:16:54.045 slat (usec): min=402, max=2101.1k, avg=60250.72, stdev=310938.36 00:16:54.045 clat (msec): min=76, max=9637, avg=3300.84, stdev=3779.51 00:16:54.045 lat (msec): min=78, max=9658, avg=3361.09, stdev=3806.97 00:16:54.045 clat percentiles (msec): 00:16:54.045 | 1.00th=[ 79], 5.00th=[ 136], 10.00th=[ 218], 20.00th=[ 388], 00:16:54.045 | 30.00th=[ 802], 40.00th=[ 986], 50.00th=[ 1133], 60.00th=[ 1284], 00:16:54.045 | 70.00th=[ 5671], 80.00th=[ 9463], 90.00th=[ 9597], 95.00th=[ 9597], 00:16:54.045 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:16:54.045 | 99.99th=[ 9597] 00:16:54.045 bw ( KiB/s): min= 2048, max=77513, per=1.05%, avg=39780.50, stdev=53361.81, samples=2 00:16:54.045 iops : min= 2, max= 75, avg=38.50, stdev=51.62, samples=2 00:16:54.045 lat (msec) : 100=3.01%, 250=8.43%, 500=10.24%, 750=6.02%, 1000=14.46% 00:16:54.045 lat (msec) : 2000=24.70%, >=2000=33.13% 00:16:54.045 cpu : usr=0.02%, sys=0.75%, ctx=327, majf=0, minf=32769 00:16:54.045 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.3%, >=64=62.0% 00:16:54.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.045 complete : 0=0.0%, 4=97.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.5% 00:16:54.045 issued rwts: total=166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.045 job1: (groupid=0, jobs=1): err= 0: pid=778676: Tue Dec 10 04:04:47 2024 00:16:54.045 read: IOPS=14, BW=14.0MiB/s (14.7MB/s)(141MiB/10044msec) 00:16:54.045 slat (usec): min=138, max=2089.4k, avg=70978.45, stdev=325507.70 00:16:54.045 clat (msec): min=35, max=9918, avg=3566.90, stdev=4092.22 00:16:54.045 lat (msec): min=54, max=9936, avg=3637.88, stdev=4117.22 00:16:54.045 clat percentiles (msec): 00:16:54.045 | 1.00th=[ 55], 5.00th=[ 69], 10.00th=[ 109], 20.00th=[ 292], 00:16:54.045 | 30.00th=[ 542], 40.00th=[ 852], 50.00th=[ 1053], 60.00th=[ 1351], 00:16:54.045 | 70.00th=[ 5738], 80.00th=[ 9597], 90.00th=[ 9731], 95.00th=[ 9866], 00:16:54.045 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:16:54.045 | 99.99th=[ 9866] 00:16:54.045 bw ( KiB/s): min=28557, max=28557, per=0.75%, avg=28557.00, stdev= 0.00, samples=1 00:16:54.045 iops : min= 27, max= 27, avg=27.00, 
stdev= 0.00, samples=1 00:16:54.045 lat (msec) : 50=0.71%, 100=7.09%, 250=9.93%, 500=10.64%, 750=9.93% 00:16:54.045 lat (msec) : 1000=9.22%, 2000=16.31%, >=2000=36.17% 00:16:54.045 cpu : usr=0.00%, sys=0.93%, ctx=345, majf=0, minf=32769 00:16:54.045 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.7%, 16=11.3%, 32=22.7%, >=64=55.3% 00:16:54.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.045 complete : 0=0.0%, 4=93.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=6.7% 00:16:54.045 issued rwts: total=141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.045 job1: (groupid=0, jobs=1): err= 0: pid=778677: Tue Dec 10 04:04:47 2024 00:16:54.045 read: IOPS=4, BW=4361KiB/s (4465kB/s)(46.0MiB/10802msec) 00:16:54.046 slat (usec): min=577, max=2092.6k, avg=233983.33, stdev=639991.03 00:16:54.046 clat (msec): min=38, max=10799, avg=6789.18, stdev=3432.01 00:16:54.046 lat (msec): min=2033, max=10801, avg=7023.16, stdev=3326.82 00:16:54.046 clat percentiles (msec): 00:16:54.046 | 1.00th=[ 39], 5.00th=[ 2039], 10.00th=[ 2106], 20.00th=[ 2140], 00:16:54.046 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:16:54.046 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:16:54.046 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:54.046 | 99.99th=[10805] 00:16:54.046 lat (msec) : 50=2.17%, >=2000=97.83% 00:16:54.046 cpu : usr=0.00%, sys=0.32%, ctx=90, majf=0, minf=11777 00:16:54.046 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:16:54.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.046 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:54.046 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.046 job1: (groupid=0, jobs=1): err= 0: pid=778678: Tue Dec 10 04:04:47 2024 00:16:54.046 read: IOPS=78, BW=78.4MiB/s (82.3MB/s)(791MiB/10084msec) 00:16:54.046 slat (usec): min=38, max=2164.5k, avg=12674.63, stdev=107640.16 00:16:54.046 clat (msec): min=55, max=4878, avg=939.10, stdev=567.05 00:16:54.046 lat (msec): min=94, max=4888, avg=951.78, stdev=584.02 00:16:54.046 clat percentiles (msec): 00:16:54.046 | 1.00th=[ 125], 5.00th=[ 523], 10.00th=[ 531], 20.00th=[ 592], 00:16:54.046 | 30.00th=[ 634], 40.00th=[ 667], 50.00th=[ 835], 60.00th=[ 1028], 00:16:54.046 | 70.00th=[ 1116], 80.00th=[ 1217], 90.00th=[ 1368], 95.00th=[ 1536], 00:16:54.046 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:16:54.046 | 99.99th=[ 4866] 00:16:54.046 bw ( KiB/s): min=61440, max=217088, per=3.25%, avg=123405.27, stdev=58719.16, samples=11 00:16:54.046 iops : min= 60, max= 212, avg=120.45, stdev=57.40, samples=11 00:16:54.046 lat (msec) : 100=0.38%, 250=1.52%, 500=2.15%, 750=44.12%, 1000=8.60% 00:16:54.046 lat (msec) : 2000=41.72%, >=2000=1.52% 00:16:54.046 cpu : usr=0.02%, sys=1.09%, ctx=1088, majf=0, minf=32769 00:16:54.046 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.0% 00:16:54.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.046 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:54.046 issued rwts: total=791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.046 job2: (groupid=0, jobs=1): err= 0: pid=778679: Tue Dec 10 
04:04:47 2024 00:16:54.046 read: IOPS=6, BW=6522KiB/s (6679kB/s)(69.0MiB/10833msec) 00:16:54.046 slat (usec): min=570, max=2068.7k, avg=155947.52, stdev=531597.17 00:16:54.046 clat (msec): min=71, max=10827, avg=7185.64, stdev=3108.57 00:16:54.046 lat (msec): min=2101, max=10832, avg=7341.59, stdev=3014.95 00:16:54.046 clat percentiles (msec): 00:16:54.046 | 1.00th=[ 72], 5.00th=[ 2106], 10.00th=[ 4144], 20.00th=[ 4245], 00:16:54.046 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:16:54.046 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:16:54.046 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:54.046 | 99.99th=[10805] 00:16:54.046 lat (msec) : 100=1.45%, >=2000=98.55% 00:16:54.046 cpu : usr=0.00%, sys=0.54%, ctx=88, majf=0, minf=17665 00:16:54.046 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.6%, 16=23.2%, 32=46.4%, >=64=8.7% 00:16:54.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.046 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:54.046 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.046 job2: (groupid=0, jobs=1): err= 0: pid=778680: Tue Dec 10 04:04:47 2024 00:16:54.046 read: IOPS=35, BW=35.3MiB/s (37.0MB/s)(380MiB/10775msec) 00:16:54.046 slat (usec): min=59, max=2051.3k, avg=28163.04, stdev=195801.16 00:16:54.046 clat (msec): min=70, max=7122, avg=2913.66, stdev=1368.99 00:16:54.046 lat (msec): min=509, max=7127, avg=2941.83, stdev=1371.80 00:16:54.046 clat percentiles (msec): 00:16:54.046 | 1.00th=[ 510], 5.00th=[ 531], 10.00th=[ 902], 20.00th=[ 969], 00:16:54.046 | 30.00th=[ 2106], 40.00th=[ 3071], 50.00th=[ 3205], 60.00th=[ 3507], 00:16:54.046 | 70.00th=[ 3809], 80.00th=[ 4044], 90.00th=[ 4212], 95.00th=[ 4279], 00:16:54.046 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:16:54.046 | 99.99th=[ 7148] 00:16:54.046 bw ( KiB/s): min=10240, max=133120, per=1.94%, avg=73728.00, stdev=51349.97, samples=7 00:16:54.046 iops : min= 10, max= 130, avg=72.00, stdev=50.15, samples=7 00:16:54.046 lat (msec) : 100=0.26%, 750=8.68%, 1000=11.84%, 2000=7.89%, >=2000=71.32% 00:16:54.046 cpu : usr=0.02%, sys=0.84%, ctx=524, majf=0, minf=32769 00:16:54.046 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:16:54.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.046 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:54.046 issued rwts: total=380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.046 job2: (groupid=0, jobs=1): err= 0: pid=778681: Tue Dec 10 04:04:47 2024 00:16:54.046 read: IOPS=13, BW=13.2MiB/s (13.8MB/s)(143MiB/10831msec) 00:16:54.046 slat (usec): min=345, max=2130.9k, avg=75288.76, stdev=348663.95 00:16:54.046 clat (msec): min=63, max=10786, avg=4084.93, stdev=3289.75 00:16:54.046 lat (msec): min=1403, max=10787, avg=4160.22, stdev=3320.01 00:16:54.046 clat percentiles (msec): 00:16:54.046 | 1.00th=[ 1401], 5.00th=[ 1469], 10.00th=[ 1586], 20.00th=[ 1754], 00:16:54.046 | 30.00th=[ 1804], 40.00th=[ 1838], 50.00th=[ 1938], 60.00th=[ 2106], 00:16:54.046 | 70.00th=[ 6141], 80.00th=[ 6477], 90.00th=[ 9329], 95.00th=[10671], 00:16:54.046 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:54.046 | 99.99th=[10805] 00:16:54.046 bw ( KiB/s): min=30720, max=30720, 
per=0.81%, avg=30720.00, stdev= 0.00, samples=1 00:16:54.046 iops : min= 30, max= 30, avg=30.00, stdev= 0.00, samples=1 00:16:54.046 lat (msec) : 100=0.70%, 2000=53.15%, >=2000=46.15% 00:16:54.046 cpu : usr=0.00%, sys=0.79%, ctx=259, majf=0, minf=32769 00:16:54.046 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.2%, 32=22.4%, >=64=55.9% 00:16:54.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.046 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.9% 00:16:54.046 issued rwts: total=143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.046 job2: (groupid=0, jobs=1): err= 0: pid=778682: Tue Dec 10 04:04:47 2024 00:16:54.046 read: IOPS=17, BW=17.2MiB/s (18.0MB/s)(173MiB/10085msec) 00:16:54.046 slat (usec): min=357, max=2117.9k, avg=57899.94, stdev=293887.74 00:16:54.046 clat (msec): min=67, max=9627, avg=3393.52, stdev=3868.13 00:16:54.046 lat (msec): min=96, max=9636, avg=3451.42, stdev=3892.53 00:16:54.046 clat percentiles (msec): 00:16:54.046 | 1.00th=[ 96], 5.00th=[ 146], 10.00th=[ 271], 20.00th=[ 493], 00:16:54.046 | 30.00th=[ 735], 40.00th=[ 961], 50.00th=[ 1150], 60.00th=[ 1318], 00:16:54.046 | 70.00th=[ 5738], 80.00th=[ 9463], 90.00th=[ 9463], 95.00th=[ 9597], 00:16:54.046 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:16:54.046 | 99.99th=[ 9597] 00:16:54.046 bw ( KiB/s): min=12288, max=78769, per=1.20%, avg=45528.50, stdev=47009.17, samples=2 00:16:54.046 iops : min= 12, max= 76, avg=44.00, stdev=45.25, samples=2 00:16:54.046 lat (msec) : 100=1.16%, 250=6.94%, 500=12.14%, 750=10.40%, 1000=12.72% 00:16:54.046 lat (msec) : 2000=23.70%, >=2000=32.95% 00:16:54.046 cpu : usr=0.00%, sys=0.79%, ctx=401, majf=0, minf=32769 00:16:54.046 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.5%, >=64=63.6% 00:16:54.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.046 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:16:54.046 issued rwts: total=173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.046 job2: (groupid=0, jobs=1): err= 0: pid=778683: Tue Dec 10 04:04:47 2024 00:16:54.046 read: IOPS=46, BW=46.4MiB/s (48.7MB/s)(499MiB/10748msec) 00:16:54.046 slat (usec): min=73, max=2127.5k, avg=21409.05, stdev=144713.72 00:16:54.046 clat (msec): min=61, max=5884, avg=1470.02, stdev=604.98 00:16:54.046 lat (msec): min=544, max=5896, avg=1491.43, stdev=632.59 00:16:54.046 clat percentiles (msec): 00:16:54.046 | 1.00th=[ 558], 5.00th=[ 617], 10.00th=[ 735], 20.00th=[ 936], 00:16:54.046 | 30.00th=[ 1234], 40.00th=[ 1401], 50.00th=[ 1469], 60.00th=[ 1502], 00:16:54.046 | 70.00th=[ 1569], 80.00th=[ 1838], 90.00th=[ 2299], 95.00th=[ 2400], 00:16:54.046 | 99.00th=[ 3708], 99.50th=[ 3775], 99.90th=[ 5873], 99.95th=[ 5873], 00:16:54.046 | 99.99th=[ 5873] 00:16:54.046 bw ( KiB/s): min=24576, max=227328, per=2.50%, avg=94953.00, stdev=59969.94, samples=8 00:16:54.046 iops : min= 24, max= 222, avg=92.62, stdev=58.57, samples=8 00:16:54.046 lat (msec) : 100=0.20%, 750=9.82%, 1000=13.63%, 2000=58.12%, >=2000=18.24% 00:16:54.046 cpu : usr=0.00%, sys=0.88%, ctx=903, majf=0, minf=32769 00:16:54.046 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4% 00:16:54.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.046 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.3% 00:16:54.046 issued rwts: total=499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.046 job2: (groupid=0, jobs=1): err= 0: pid=778684: Tue Dec 10 04:04:47 2024 00:16:54.046 read: IOPS=16, BW=16.4MiB/s (17.2MB/s)(164MiB/10024msec) 00:16:54.046 slat (usec): min=515, max=2109.3k, avg=60976.09, stdev=321368.93 00:16:54.046 clat (msec): min=22, max=9717, avg=1267.87, stdev=2054.84 00:16:54.046 lat (msec): min=23, max=9732, avg=1328.85, stdev=2158.33 00:16:54.046 clat percentiles (msec): 00:16:54.046 | 1.00th=[ 24], 5.00th=[ 63], 10.00th=[ 125], 20.00th=[ 251], 00:16:54.046 | 30.00th=[ 372], 40.00th=[ 527], 50.00th=[ 659], 60.00th=[ 827], 00:16:54.046 | 70.00th=[ 961], 80.00th=[ 1183], 90.00th=[ 3406], 95.00th=[ 5537], 00:16:54.046 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:16:54.046 | 99.99th=[ 9731] 00:16:54.046 bw ( KiB/s): min=75776, max=75776, per=2.00%, avg=75776.00, stdev= 0.00, samples=1 00:16:54.046 iops : min= 74, max= 74, avg=74.00, stdev= 0.00, samples=1 00:16:54.046 lat (msec) : 50=1.83%, 100=6.71%, 250=10.98%, 500=19.51%, 750=16.46% 00:16:54.046 lat (msec) : 1000=15.24%, 2000=18.29%, >=2000=10.98% 00:16:54.046 cpu : usr=0.01%, sys=0.75%, ctx=353, majf=0, minf=32769 00:16:54.046 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.9%, 16=9.8%, 32=19.5%, >=64=61.6% 00:16:54.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.047 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:16:54.047 issued rwts: total=164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.047 job2: (groupid=0, jobs=1): err= 0: pid=778685: Tue Dec 10 04:04:47 2024 00:16:54.047 read: IOPS=38, BW=38.4MiB/s (40.3MB/s)(414MiB/10771msec) 00:16:54.047 slat (usec): min=360, max=2072.5k, avg=25860.25, stdev=151584.83 00:16:54.047 clat (msec): min=62, max=5716, avg=1876.83, stdev=829.08 00:16:54.047 lat (msec): min=1127, max=5724, avg=1902.69, stdev=845.10 00:16:54.047 clat percentiles (msec): 00:16:54.047 | 1.00th=[ 1200], 5.00th=[ 1284], 10.00th=[ 1301], 20.00th=[ 1368], 00:16:54.047 | 30.00th=[ 1418], 40.00th=[ 1452], 50.00th=[ 1502], 60.00th=[ 1653], 00:16:54.047 | 70.00th=[ 1921], 80.00th=[ 2400], 90.00th=[ 3004], 95.00th=[ 3171], 00:16:54.047 | 99.00th=[ 5671], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:16:54.047 | 99.99th=[ 5738] 00:16:54.047 bw ( KiB/s): min=10019, max=106496, per=1.71%, avg=65056.33, stdev=30250.02, samples=9 00:16:54.047 iops : min= 9, max= 104, avg=63.44, stdev=29.72, samples=9 00:16:54.047 lat (msec) : 100=0.24%, 2000=72.22%, >=2000=27.54% 00:16:54.047 cpu : usr=0.03%, sys=0.85%, ctx=1018, majf=0, minf=32769 00:16:54.047 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8% 00:16:54.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.047 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:54.047 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.047 job2: (groupid=0, jobs=1): err= 0: pid=778686: Tue Dec 10 04:04:47 2024 00:16:54.047 read: IOPS=65, BW=65.9MiB/s (69.1MB/s)(662MiB/10049msec) 00:16:54.047 slat (usec): min=29, max=2092.7k, avg=15107.13, stdev=115064.16 00:16:54.047 clat (msec): min=45, max=4878, avg=1792.84, stdev=1151.58 00:16:54.047 lat (msec): min=94, max=4879, avg=1807.95, 
stdev=1153.98 00:16:54.047 clat percentiles (msec): 00:16:54.047 | 1.00th=[ 95], 5.00th=[ 351], 10.00th=[ 493], 20.00th=[ 768], 00:16:54.047 | 30.00th=[ 877], 40.00th=[ 1234], 50.00th=[ 1385], 60.00th=[ 1519], 00:16:54.047 | 70.00th=[ 2970], 80.00th=[ 3037], 90.00th=[ 3239], 95.00th=[ 3574], 00:16:54.047 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:16:54.047 | 99.99th=[ 4866] 00:16:54.047 bw ( KiB/s): min=16384, max=194171, per=2.40%, avg=91026.58, stdev=55383.18, samples=12 00:16:54.047 iops : min= 16, max= 189, avg=88.83, stdev=53.97, samples=12 00:16:54.047 lat (msec) : 50=0.15%, 100=2.27%, 250=1.36%, 500=6.65%, 750=8.31% 00:16:54.047 lat (msec) : 1000=17.52%, 2000=25.38%, >=2000=38.37% 00:16:54.047 cpu : usr=0.02%, sys=1.07%, ctx=943, majf=0, minf=32769 00:16:54.047 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:16:54.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.047 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:54.047 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.047 job2: (groupid=0, jobs=1): err= 0: pid=778687: Tue Dec 10 04:04:47 2024 00:16:54.047 read: IOPS=84, BW=84.2MiB/s (88.2MB/s)(849MiB/10089msec) 00:16:54.047 slat (usec): min=30, max=2128.7k, avg=11776.08, stdev=103665.21 00:16:54.047 clat (msec): min=87, max=5180, avg=1421.16, stdev=1127.92 00:16:54.047 lat (msec): min=93, max=7228, avg=1432.94, stdev=1140.67 00:16:54.047 clat percentiles (msec): 00:16:54.047 | 1.00th=[ 144], 5.00th=[ 363], 10.00th=[ 527], 20.00th=[ 558], 00:16:54.047 | 30.00th=[ 592], 40.00th=[ 600], 50.00th=[ 793], 60.00th=[ 1200], 00:16:54.047 | 70.00th=[ 1452], 80.00th=[ 2937], 90.00th=[ 3037], 95.00th=[ 3171], 00:16:54.047 | 99.00th=[ 3373], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201], 00:16:54.047 | 99.99th=[ 5201] 00:16:54.047 bw ( KiB/s): min=43008, max=237568, per=3.54%, avg=134215.36, stdev=70665.03, samples=11 00:16:54.047 iops : min= 42, max= 232, avg=131.00, stdev=68.96, samples=11 00:16:54.047 lat (msec) : 100=0.24%, 250=2.59%, 500=5.30%, 750=39.93%, 1000=8.48% 00:16:54.047 lat (msec) : 2000=13.55%, >=2000=29.92% 00:16:54.047 cpu : usr=0.02%, sys=1.21%, ctx=969, majf=0, minf=32769 00:16:54.047 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:16:54.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.047 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.047 issued rwts: total=849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.047 job2: (groupid=0, jobs=1): err= 0: pid=778688: Tue Dec 10 04:04:47 2024 00:16:54.047 read: IOPS=20, BW=20.1MiB/s (21.1MB/s)(202MiB/10055msec) 00:16:54.047 slat (usec): min=348, max=2098.9k, avg=49518.25, stdev=286369.48 00:16:54.047 clat (msec): min=51, max=9443, avg=1418.68, stdev=2236.08 00:16:54.047 lat (msec): min=54, max=9460, avg=1468.19, stdev=2306.10 00:16:54.047 clat percentiles (msec): 00:16:54.047 | 1.00th=[ 60], 5.00th=[ 118], 10.00th=[ 180], 20.00th=[ 347], 00:16:54.047 | 30.00th=[ 481], 40.00th=[ 642], 50.00th=[ 768], 60.00th=[ 885], 00:16:54.047 | 70.00th=[ 911], 80.00th=[ 995], 90.00th=[ 3138], 95.00th=[ 7349], 00:16:54.047 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:16:54.047 | 99.99th=[ 9463] 00:16:54.047 bw ( KiB/s): min=12288, 
max=137730, per=1.98%, avg=75009.00, stdev=88700.89, samples=2 00:16:54.047 iops : min= 12, max= 134, avg=73.00, stdev=86.27, samples=2 00:16:54.047 lat (msec) : 100=2.97%, 250=11.88%, 500=15.35%, 750=17.33%, 1000=33.66% 00:16:54.047 lat (msec) : 2000=4.95%, >=2000=13.86% 00:16:54.047 cpu : usr=0.00%, sys=0.69%, ctx=366, majf=0, minf=32769 00:16:54.047 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=7.9%, 32=15.8%, >=64=68.8% 00:16:54.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.047 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:16:54.047 issued rwts: total=202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.047 job2: (groupid=0, jobs=1): err= 0: pid=778689: Tue Dec 10 04:04:47 2024 00:16:54.047 read: IOPS=70, BW=70.8MiB/s (74.2MB/s)(712MiB/10056msec) 00:16:54.047 slat (usec): min=49, max=2044.6k, avg=14048.79, stdev=77412.84 00:16:54.047 clat (msec): min=49, max=3982, avg=1604.16, stdev=924.84 00:16:54.047 lat (msec): min=69, max=3994, avg=1618.21, stdev=927.04 00:16:54.047 clat percentiles (msec): 00:16:54.047 | 1.00th=[ 128], 5.00th=[ 506], 10.00th=[ 676], 20.00th=[ 961], 00:16:54.047 | 30.00th=[ 1200], 40.00th=[ 1284], 50.00th=[ 1385], 60.00th=[ 1502], 00:16:54.047 | 70.00th=[ 1586], 80.00th=[ 1670], 90.00th=[ 3306], 95.00th=[ 3708], 00:16:54.047 | 99.00th=[ 3910], 99.50th=[ 3910], 99.90th=[ 3977], 99.95th=[ 3977], 00:16:54.047 | 99.99th=[ 3977] 00:16:54.047 bw ( KiB/s): min= 2048, max=225280, per=2.25%, avg=85447.14, stdev=51823.60, samples=14 00:16:54.047 iops : min= 2, max= 220, avg=83.36, stdev=50.63, samples=14 00:16:54.047 lat (msec) : 50=0.14%, 100=0.28%, 250=1.40%, 500=2.95%, 750=9.55% 00:16:54.047 lat (msec) : 1000=7.02%, 2000=60.81%, >=2000=17.84% 00:16:54.047 cpu : usr=0.02%, sys=1.14%, ctx=1446, majf=0, minf=32769 00:16:54.047 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:16:54.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.047 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:54.047 issued rwts: total=712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.047 job2: (groupid=0, jobs=1): err= 0: pid=778690: Tue Dec 10 04:04:47 2024 00:16:54.047 read: IOPS=48, BW=48.5MiB/s (50.8MB/s)(487MiB/10044msec) 00:16:54.047 slat (usec): min=46, max=2098.4k, avg=20540.76, stdev=134306.98 00:16:54.047 clat (msec): min=38, max=6246, avg=1212.92, stdev=618.90 00:16:54.047 lat (msec): min=47, max=6299, avg=1233.46, stdev=661.98 00:16:54.047 clat percentiles (msec): 00:16:54.047 | 1.00th=[ 80], 5.00th=[ 159], 10.00th=[ 284], 20.00th=[ 584], 00:16:54.047 | 30.00th=[ 885], 40.00th=[ 1401], 50.00th=[ 1452], 60.00th=[ 1502], 00:16:54.047 | 70.00th=[ 1536], 80.00th=[ 1670], 90.00th=[ 1754], 95.00th=[ 1838], 00:16:54.047 | 99.00th=[ 2039], 99.50th=[ 4144], 99.90th=[ 6275], 99.95th=[ 6275], 00:16:54.047 | 99.99th=[ 6275] 00:16:54.047 bw ( KiB/s): min=40960, max=180110, per=2.42%, avg=91889.75, stdev=44305.53, samples=8 00:16:54.047 iops : min= 40, max= 175, avg=89.62, stdev=43.01, samples=8 00:16:54.047 lat (msec) : 50=0.62%, 100=2.26%, 250=5.13%, 500=8.62%, 750=7.39% 00:16:54.047 lat (msec) : 1000=11.09%, 2000=63.66%, >=2000=1.23% 00:16:54.047 cpu : usr=0.03%, sys=0.97%, ctx=1127, majf=0, minf=32769 00:16:54.047 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.6%, >=64=87.1% 00:16:54.047 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.047 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:54.047 issued rwts: total=487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.048 job2: (groupid=0, jobs=1): err= 0: pid=778691: Tue Dec 10 04:04:47 2024 00:16:54.048 read: IOPS=90, BW=90.6MiB/s (95.1MB/s)(912MiB/10061msec) 00:16:54.048 slat (usec): min=45, max=2153.7k, avg=10963.75, stdev=98630.78 00:16:54.048 clat (msec): min=58, max=5254, avg=1338.07, stdev=1576.86 00:16:54.048 lat (msec): min=68, max=5260, avg=1349.04, stdev=1585.51 00:16:54.048 clat percentiles (msec): 00:16:54.048 | 1.00th=[ 136], 5.00th=[ 338], 10.00th=[ 355], 20.00th=[ 359], 00:16:54.048 | 30.00th=[ 518], 40.00th=[ 651], 50.00th=[ 810], 60.00th=[ 919], 00:16:54.048 | 70.00th=[ 1011], 80.00th=[ 1167], 90.00th=[ 5201], 95.00th=[ 5201], 00:16:54.048 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:16:54.048 | 99.99th=[ 5269] 00:16:54.048 bw ( KiB/s): min=16384, max=344064, per=3.53%, avg=133973.33, stdev=91261.78, samples=12 00:16:54.048 iops : min= 16, max= 336, avg=130.83, stdev=89.12, samples=12 00:16:54.048 lat (msec) : 100=0.66%, 250=1.21%, 500=27.41%, 750=17.87%, 1000=21.93% 00:16:54.048 lat (msec) : 2000=15.79%, >=2000=15.13% 00:16:54.048 cpu : usr=0.06%, sys=1.36%, ctx=1274, majf=0, minf=32769 00:16:54.048 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:16:54.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.048 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.048 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.048 job3: (groupid=0, jobs=1): err= 0: pid=778692: Tue Dec 10 04:04:47 2024 00:16:54.048 read: IOPS=39, BW=39.9MiB/s (41.9MB/s)(433MiB/10847msec) 00:16:54.048 slat (usec): min=121, max=2072.9k, avg=24985.33, stdev=185575.18 00:16:54.048 clat (msec): min=27, max=5078, avg=3099.31, stdev=1859.83 00:16:54.048 lat (msec): min=553, max=5083, avg=3124.29, stdev=1853.87 00:16:54.048 clat percentiles (msec): 00:16:54.048 | 1.00th=[ 558], 5.00th=[ 584], 10.00th=[ 726], 20.00th=[ 818], 00:16:54.048 | 30.00th=[ 1070], 40.00th=[ 2123], 50.00th=[ 4396], 60.00th=[ 4530], 00:16:54.048 | 70.00th=[ 4665], 80.00th=[ 4799], 90.00th=[ 5000], 95.00th=[ 5067], 00:16:54.048 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:16:54.048 | 99.99th=[ 5067] 00:16:54.048 bw ( KiB/s): min= 8192, max=221184, per=1.83%, avg=69404.44, stdev=73788.02, samples=9 00:16:54.048 iops : min= 8, max= 216, avg=67.78, stdev=72.06, samples=9 00:16:54.048 lat (msec) : 50=0.23%, 750=16.63%, 1000=10.62%, 2000=11.09%, >=2000=61.43% 00:16:54.048 cpu : usr=0.00%, sys=1.25%, ctx=986, majf=0, minf=32770 00:16:54.048 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:16:54.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.048 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:54.048 issued rwts: total=433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.048 job3: (groupid=0, jobs=1): err= 0: pid=778693: Tue Dec 10 04:04:47 2024 00:16:54.048 read: IOPS=178, BW=178MiB/s (187MB/s)(1910MiB/10716msec) 00:16:54.048 slat (usec): min=40, max=1986.6k, 
avg=5595.26, stdev=46487.69 00:16:54.048 clat (msec): min=23, max=2483, avg=656.38, stdev=515.19 00:16:54.048 lat (msec): min=248, max=2485, avg=661.97, stdev=517.09 00:16:54.048 clat percentiles (msec): 00:16:54.048 | 1.00th=[ 255], 5.00th=[ 264], 10.00th=[ 275], 20.00th=[ 359], 00:16:54.048 | 30.00th=[ 372], 40.00th=[ 426], 50.00th=[ 481], 60.00th=[ 575], 00:16:54.048 | 70.00th=[ 609], 80.00th=[ 684], 90.00th=[ 1401], 95.00th=[ 2198], 00:16:54.048 | 99.00th=[ 2433], 99.50th=[ 2433], 99.90th=[ 2467], 99.95th=[ 2500], 00:16:54.048 | 99.99th=[ 2500] 00:16:54.048 bw ( KiB/s): min=63488, max=464896, per=6.41%, avg=243291.47, stdev=120019.77, samples=15 00:16:54.048 iops : min= 62, max= 454, avg=237.53, stdev=117.29, samples=15 00:16:54.048 lat (msec) : 50=0.05%, 250=0.42%, 500=51.36%, 750=30.94%, 1000=2.83% 00:16:54.048 lat (msec) : 2000=7.75%, >=2000=6.65% 00:16:54.048 cpu : usr=0.04%, sys=1.47%, ctx=2081, majf=0, minf=32769 00:16:54.048 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:54.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.048 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.048 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.048 job3: (groupid=0, jobs=1): err= 0: pid=778694: Tue Dec 10 04:04:47 2024 00:16:54.048 read: IOPS=101, BW=102MiB/s (107MB/s)(1023MiB/10060msec) 00:16:54.048 slat (usec): min=29, max=2000.2k, avg=9771.63, stdev=63858.34 00:16:54.048 clat (msec): min=59, max=3900, avg=1169.29, stdev=1011.01 00:16:54.048 lat (msec): min=62, max=3903, avg=1179.06, stdev=1015.74 00:16:54.048 clat percentiles (msec): 00:16:54.048 | 1.00th=[ 103], 5.00th=[ 305], 10.00th=[ 493], 20.00th=[ 510], 00:16:54.048 | 30.00th=[ 667], 40.00th=[ 735], 50.00th=[ 793], 60.00th=[ 844], 00:16:54.048 | 70.00th=[ 1036], 80.00th=[ 1502], 90.00th=[ 3339], 95.00th=[ 3775], 00:16:54.048 | 99.00th=[ 3842], 99.50th=[ 3876], 99.90th=[ 3910], 99.95th=[ 3910], 00:16:54.048 | 99.99th=[ 3910] 00:16:54.048 bw ( KiB/s): min=22573, max=251904, per=3.22%, avg=122415.67, stdev=70743.54, samples=15 00:16:54.048 iops : min= 22, max= 246, avg=119.40, stdev=69.03, samples=15 00:16:54.048 lat (msec) : 100=0.88%, 250=3.23%, 500=9.87%, 750=28.84%, 1000=25.71% 00:16:54.048 lat (msec) : 2000=18.67%, >=2000=12.81% 00:16:54.048 cpu : usr=0.05%, sys=1.62%, ctx=1777, majf=0, minf=32769 00:16:54.048 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:16:54.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.048 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.048 issued rwts: total=1023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.048 job3: (groupid=0, jobs=1): err= 0: pid=778695: Tue Dec 10 04:04:47 2024 00:16:54.048 read: IOPS=104, BW=104MiB/s (109MB/s)(1045MiB/10047msec) 00:16:54.048 slat (usec): min=66, max=2054.0k, avg=9565.87, stdev=64527.12 00:16:54.048 clat (msec): min=45, max=3553, avg=1180.88, stdev=876.80 00:16:54.048 lat (msec): min=76, max=3558, avg=1190.45, stdev=880.48 00:16:54.048 clat percentiles (msec): 00:16:54.048 | 1.00th=[ 126], 5.00th=[ 409], 10.00th=[ 464], 20.00th=[ 481], 00:16:54.048 | 30.00th=[ 489], 40.00th=[ 506], 50.00th=[ 902], 60.00th=[ 1401], 00:16:54.048 | 70.00th=[ 1485], 80.00th=[ 1670], 90.00th=[ 2836], 95.00th=[ 3205], 00:16:54.048 | 
99.00th=[ 3507], 99.50th=[ 3540], 99.90th=[ 3540], 99.95th=[ 3540], 00:16:54.048 | 99.99th=[ 3540] 00:16:54.048 bw ( KiB/s): min=22528, max=266240, per=3.10%, avg=117504.00, stdev=70794.38, samples=16 00:16:54.048 iops : min= 22, max= 260, avg=114.75, stdev=69.14, samples=16 00:16:54.048 lat (msec) : 50=0.10%, 100=0.67%, 250=2.20%, 500=33.21%, 750=12.82% 00:16:54.048 lat (msec) : 1000=1.63%, 2000=37.22%, >=2000=12.15% 00:16:54.048 cpu : usr=0.01%, sys=1.93%, ctx=1538, majf=0, minf=32769 00:16:54.048 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:16:54.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.048 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.048 issued rwts: total=1045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.048 job3: (groupid=0, jobs=1): err= 0: pid=778696: Tue Dec 10 04:04:47 2024 00:16:54.048 read: IOPS=29, BW=29.6MiB/s (31.0MB/s)(319MiB/10783msec) 00:16:54.048 slat (usec): min=438, max=2097.4k, avg=33725.07, stdev=227363.04 00:16:54.048 clat (msec): min=23, max=5441, avg=2556.66, stdev=2056.78 00:16:54.048 lat (msec): min=402, max=5458, avg=2590.39, stdev=2058.13 00:16:54.048 clat percentiles (msec): 00:16:54.048 | 1.00th=[ 401], 5.00th=[ 409], 10.00th=[ 439], 20.00th=[ 558], 00:16:54.048 | 30.00th=[ 726], 40.00th=[ 894], 50.00th=[ 1045], 60.00th=[ 4396], 00:16:54.048 | 70.00th=[ 4799], 80.00th=[ 4933], 90.00th=[ 5000], 95.00th=[ 5067], 00:16:54.048 | 99.00th=[ 5403], 99.50th=[ 5470], 99.90th=[ 5470], 99.95th=[ 5470], 00:16:54.048 | 99.99th=[ 5470] 00:16:54.048 bw ( KiB/s): min= 4096, max=303104, per=2.06%, avg=78233.60, stdev=127802.42, samples=5 00:16:54.048 iops : min= 4, max= 296, avg=76.40, stdev=124.81, samples=5 00:16:54.048 lat (msec) : 50=0.31%, 500=13.17%, 750=17.24%, 1000=15.99%, 2000=6.90% 00:16:54.048 lat (msec) : >=2000=46.39% 00:16:54.048 cpu : usr=0.00%, sys=0.73%, ctx=1022, majf=0, minf=32769 00:16:54.048 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.3% 00:16:54.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.048 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:54.048 issued rwts: total=319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.048 job3: (groupid=0, jobs=1): err= 0: pid=778697: Tue Dec 10 04:04:47 2024 00:16:54.048 read: IOPS=1, BW=1904KiB/s (1950kB/s)(20.0MiB/10755msec) 00:16:54.048 slat (msec): min=9, max=2113, avg=534.07, stdev=907.56 00:16:54.048 clat (msec): min=72, max=10739, avg=6526.27, stdev=3431.30 00:16:54.048 lat (msec): min=2100, max=10754, avg=7060.34, stdev=3197.21 00:16:54.048 clat percentiles (msec): 00:16:54.048 | 1.00th=[ 73], 5.00th=[ 73], 10.00th=[ 2106], 20.00th=[ 2140], 00:16:54.048 | 30.00th=[ 4245], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 6477], 00:16:54.048 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:54.048 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:54.048 | 99.99th=[10805] 00:16:54.048 lat (msec) : 100=5.00%, >=2000=95.00% 00:16:54.048 cpu : usr=0.00%, sys=0.13%, ctx=81, majf=0, minf=5121 00:16:54.048 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:16:54.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.048 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=100.0%, 64=0.0%, >=64=0.0% 00:16:54.048 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.048 job3: (groupid=0, jobs=1): err= 0: pid=778698: Tue Dec 10 04:04:47 2024 00:16:54.048 read: IOPS=37, BW=37.5MiB/s (39.3MB/s)(402MiB/10732msec) 00:16:54.048 slat (usec): min=770, max=2042.0k, avg=26626.36, stdev=196175.72 00:16:54.048 clat (msec): min=26, max=5329, avg=2038.83, stdev=1819.71 00:16:54.048 lat (msec): min=410, max=5372, avg=2065.46, stdev=1826.96 00:16:54.048 clat percentiles (msec): 00:16:54.048 | 1.00th=[ 418], 5.00th=[ 430], 10.00th=[ 439], 20.00th=[ 451], 00:16:54.048 | 30.00th=[ 502], 40.00th=[ 651], 50.00th=[ 919], 60.00th=[ 1234], 00:16:54.048 | 70.00th=[ 4245], 80.00th=[ 4463], 90.00th=[ 4665], 95.00th=[ 4732], 00:16:54.048 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5336], 99.95th=[ 5336], 00:16:54.048 | 99.99th=[ 5336] 00:16:54.048 bw ( KiB/s): min= 4096, max=292864, per=2.96%, avg=112223.00, stdev=128561.78, samples=5 00:16:54.048 iops : min= 4, max= 286, avg=109.40, stdev=125.73, samples=5 00:16:54.048 lat (msec) : 50=0.25%, 500=29.10%, 750=13.43%, 1000=10.20%, 2000=7.21% 00:16:54.048 lat (msec) : >=2000=39.80% 00:16:54.049 cpu : usr=0.00%, sys=0.77%, ctx=1145, majf=0, minf=32769 00:16:54.049 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.3% 00:16:54.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.049 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:54.049 issued rwts: total=402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.049 job3: (groupid=0, jobs=1): err= 0: pid=778699: Tue Dec 10 04:04:47 2024 00:16:54.049 read: IOPS=113, BW=113MiB/s (119MB/s)(1233MiB/10875msec) 00:16:54.049 slat (usec): min=41, max=2164.1k, avg=8800.78, stdev=106049.07 00:16:54.049 clat (msec): min=20, max=4835, avg=754.78, stdev=1038.42 00:16:54.049 lat (msec): min=235, max=4836, avg=763.59, stdev=1046.39 00:16:54.049 clat percentiles (msec): 00:16:54.049 | 1.00th=[ 241], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 330], 00:16:54.049 | 30.00th=[ 380], 40.00th=[ 384], 50.00th=[ 388], 60.00th=[ 405], 00:16:54.049 | 70.00th=[ 414], 80.00th=[ 439], 90.00th=[ 2970], 95.00th=[ 3171], 00:16:54.049 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:16:54.049 | 99.99th=[ 4866] 00:16:54.049 bw ( KiB/s): min= 2048, max=524288, per=6.62%, avg=251447.89, stdev=173205.32, samples=9 00:16:54.049 iops : min= 2, max= 512, avg=245.44, stdev=169.32, samples=9 00:16:54.049 lat (msec) : 50=0.08%, 250=2.92%, 500=84.10%, 2000=0.08%, >=2000=12.81% 00:16:54.049 cpu : usr=0.02%, sys=1.22%, ctx=1547, majf=0, minf=32769 00:16:54.049 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:16:54.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.049 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.049 issued rwts: total=1233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.049 job3: (groupid=0, jobs=1): err= 0: pid=778700: Tue Dec 10 04:04:47 2024 00:16:54.049 read: IOPS=32, BW=32.8MiB/s (34.4MB/s)(356MiB/10864msec) 00:16:54.049 slat (usec): min=71, max=2090.3k, avg=30454.40, stdev=201386.17 00:16:54.049 clat (msec): min=20, max=5509, avg=2588.21, stdev=1550.44 00:16:54.049 lat (msec): min=1103, max=5513, 
avg=2618.66, stdev=1550.45 00:16:54.049 clat percentiles (msec): 00:16:54.049 | 1.00th=[ 1099], 5.00th=[ 1116], 10.00th=[ 1150], 20.00th=[ 1167], 00:16:54.049 | 30.00th=[ 1200], 40.00th=[ 1234], 50.00th=[ 2123], 60.00th=[ 2601], 00:16:54.049 | 70.00th=[ 3842], 80.00th=[ 4212], 90.00th=[ 4530], 95.00th=[ 5470], 00:16:54.049 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537], 00:16:54.049 | 99.99th=[ 5537] 00:16:54.049 bw ( KiB/s): min=34816, max=124928, per=2.05%, avg=77824.00, stdev=34367.37, samples=6 00:16:54.049 iops : min= 34, max= 122, avg=76.00, stdev=33.56, samples=6 00:16:54.049 lat (msec) : 50=0.28%, 2000=47.19%, >=2000=52.53% 00:16:54.049 cpu : usr=0.00%, sys=0.97%, ctx=691, majf=0, minf=32769 00:16:54.049 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=9.0%, >=64=82.3% 00:16:54.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.049 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:54.049 issued rwts: total=356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.049 job3: (groupid=0, jobs=1): err= 0: pid=778701: Tue Dec 10 04:04:47 2024 00:16:54.049 read: IOPS=7, BW=7992KiB/s (8184kB/s)(85.0MiB/10891msec) 00:16:54.049 slat (usec): min=777, max=2071.6k, avg=127279.01, stdev=481373.63 00:16:54.049 clat (msec): min=71, max=10888, avg=8985.91, stdev=2866.70 00:16:54.049 lat (msec): min=2095, max=10890, avg=9113.19, stdev=2701.61 00:16:54.049 clat percentiles (msec): 00:16:54.049 | 1.00th=[ 71], 5.00th=[ 4144], 10.00th=[ 4279], 20.00th=[ 6409], 00:16:54.049 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10805], 60.00th=[10805], 00:16:54.049 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:16:54.049 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:16:54.049 | 99.99th=[10939] 00:16:54.049 lat (msec) : 100=1.18%, >=2000=98.82% 00:16:54.049 cpu : usr=0.00%, sys=0.81%, ctx=124, majf=0, minf=21761 00:16:54.049 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9% 00:16:54.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.049 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:54.049 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.049 job3: (groupid=0, jobs=1): err= 0: pid=778702: Tue Dec 10 04:04:47 2024 00:16:54.049 read: IOPS=58, BW=58.1MiB/s (60.9MB/s)(582MiB/10025msec) 00:16:54.049 slat (usec): min=499, max=2053.2k, avg=17186.39, stdev=119881.13 00:16:54.049 clat (msec): min=20, max=5527, avg=1346.26, stdev=991.77 00:16:54.049 lat (msec): min=32, max=5533, avg=1363.45, stdev=1006.47 00:16:54.049 clat percentiles (msec): 00:16:54.049 | 1.00th=[ 37], 5.00th=[ 112], 10.00th=[ 292], 20.00th=[ 978], 00:16:54.049 | 30.00th=[ 1167], 40.00th=[ 1318], 50.00th=[ 1351], 60.00th=[ 1401], 00:16:54.049 | 70.00th=[ 1435], 80.00th=[ 1469], 90.00th=[ 1536], 95.00th=[ 1586], 00:16:54.049 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537], 00:16:54.049 | 99.99th=[ 5537] 00:16:54.049 bw ( KiB/s): min=18432, max=172032, per=2.45%, avg=93184.00, stdev=42864.18, samples=10 00:16:54.049 iops : min= 18, max= 168, avg=91.00, stdev=41.86, samples=10 00:16:54.049 lat (msec) : 50=2.41%, 100=1.03%, 250=5.67%, 500=5.33%, 750=3.44% 00:16:54.049 lat (msec) : 1000=2.58%, 2000=74.74%, >=2000=4.81% 00:16:54.049 cpu : usr=0.02%, 
sys=0.96%, ctx=1071, majf=0, minf=32769 00:16:54.049 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:16:54.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.049 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:54.049 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.049 job3: (groupid=0, jobs=1): err= 0: pid=778703: Tue Dec 10 04:04:47 2024 00:16:54.049 read: IOPS=25, BW=25.1MiB/s (26.3MB/s)(274MiB/10904msec) 00:16:54.049 slat (usec): min=461, max=2075.9k, avg=39523.11, stdev=234123.55 00:16:54.049 clat (msec): min=72, max=9690, avg=4898.22, stdev=3565.94 00:16:54.049 lat (msec): min=978, max=9696, avg=4937.74, stdev=3562.28 00:16:54.049 clat percentiles (msec): 00:16:54.049 | 1.00th=[ 1011], 5.00th=[ 1099], 10.00th=[ 1250], 20.00th=[ 1469], 00:16:54.049 | 30.00th=[ 1536], 40.00th=[ 1737], 50.00th=[ 3440], 60.00th=[ 7617], 00:16:54.049 | 70.00th=[ 8557], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9463], 00:16:54.049 | 99.00th=[ 9597], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:16:54.049 | 99.99th=[ 9731] 00:16:54.049 bw ( KiB/s): min= 2048, max=88064, per=0.88%, avg=33223.11, stdev=33648.61, samples=9 00:16:54.049 iops : min= 2, max= 86, avg=32.44, stdev=32.86, samples=9 00:16:54.049 lat (msec) : 100=0.36%, 1000=0.36%, 2000=46.35%, >=2000=52.92% 00:16:54.049 cpu : usr=0.04%, sys=1.17%, ctx=513, majf=0, minf=32769 00:16:54.049 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.8%, 32=11.7%, >=64=77.0% 00:16:54.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.049 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:16:54.049 issued rwts: total=274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.049 job3: (groupid=0, jobs=1): err= 0: pid=778704: Tue Dec 10 04:04:47 2024 00:16:54.049 read: IOPS=113, BW=113MiB/s (119MB/s)(1224MiB/10814msec) 00:16:54.049 slat (usec): min=50, max=2019.0k, avg=8167.73, stdev=59195.14 00:16:54.049 clat (msec): min=447, max=4518, avg=1071.95, stdev=917.25 00:16:54.049 lat (msec): min=448, max=4520, avg=1080.12, stdev=921.92 00:16:54.049 clat percentiles (msec): 00:16:54.049 | 1.00th=[ 489], 5.00th=[ 502], 10.00th=[ 510], 20.00th=[ 542], 00:16:54.049 | 30.00th=[ 584], 40.00th=[ 659], 50.00th=[ 726], 60.00th=[ 810], 00:16:54.049 | 70.00th=[ 944], 80.00th=[ 1234], 90.00th=[ 3071], 95.00th=[ 3574], 00:16:54.049 | 99.00th=[ 4245], 99.50th=[ 4245], 99.90th=[ 4463], 99.95th=[ 4530], 00:16:54.049 | 99.99th=[ 4530] 00:16:54.049 bw ( KiB/s): min=22528, max=265708, per=3.70%, avg=140382.75, stdev=82175.74, samples=16 00:16:54.049 iops : min= 22, max= 259, avg=137.06, stdev=80.20, samples=16 00:16:54.049 lat (msec) : 500=4.49%, 750=47.63%, 1000=21.65%, 2000=15.77%, >=2000=10.46% 00:16:54.049 cpu : usr=0.08%, sys=1.81%, ctx=1811, majf=0, minf=32769 00:16:54.049 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.9% 00:16:54.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.049 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.049 issued rwts: total=1224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.049 job4: (groupid=0, jobs=1): err= 0: pid=778705: Tue Dec 10 04:04:47 2024 00:16:54.049 read: IOPS=17, 
BW=17.7MiB/s (18.6MB/s)(179MiB/10107msec) 00:16:54.049 slat (usec): min=479, max=2088.0k, avg=55927.62, stdev=302066.92 00:16:54.049 clat (msec): min=94, max=9605, avg=2986.14, stdev=3343.03 00:16:54.049 lat (msec): min=107, max=9614, avg=3042.06, stdev=3375.26 00:16:54.049 clat percentiles (msec): 00:16:54.049 | 1.00th=[ 108], 5.00th=[ 194], 10.00th=[ 300], 20.00th=[ 439], 00:16:54.049 | 30.00th=[ 667], 40.00th=[ 877], 50.00th=[ 1099], 60.00th=[ 1318], 00:16:54.049 | 70.00th=[ 5403], 80.00th=[ 5537], 90.00th=[ 9597], 95.00th=[ 9597], 00:16:54.049 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:16:54.049 | 99.99th=[ 9597] 00:16:54.049 bw ( KiB/s): min= 6144, max=98304, per=1.38%, avg=52224.00, stdev=65166.96, samples=2 00:16:54.049 iops : min= 6, max= 96, avg=51.00, stdev=63.64, samples=2 00:16:54.049 lat (msec) : 100=0.56%, 250=7.82%, 500=13.41%, 750=12.29%, 1000=11.73% 00:16:54.049 lat (msec) : 2000=18.44%, >=2000=35.75% 00:16:54.049 cpu : usr=0.01%, sys=0.99%, ctx=269, majf=0, minf=32769 00:16:54.049 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.5%, 16=8.9%, 32=17.9%, >=64=64.8% 00:16:54.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.049 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.9% 00:16:54.049 issued rwts: total=179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.049 job4: (groupid=0, jobs=1): err= 0: pid=778706: Tue Dec 10 04:04:47 2024 00:16:54.049 read: IOPS=23, BW=23.0MiB/s (24.2MB/s)(247MiB/10717msec) 00:16:54.049 slat (usec): min=560, max=2057.2k, avg=43197.11, stdev=249934.61 00:16:54.049 clat (msec): min=45, max=9646, avg=5215.57, stdev=3723.57 00:16:54.049 lat (msec): min=1107, max=9655, avg=5258.77, stdev=3715.65 00:16:54.049 clat percentiles (msec): 00:16:54.049 | 1.00th=[ 1099], 5.00th=[ 1133], 10.00th=[ 1200], 20.00th=[ 1250], 00:16:54.049 | 30.00th=[ 1368], 40.00th=[ 1502], 50.00th=[ 6409], 60.00th=[ 8423], 00:16:54.049 | 70.00th=[ 8792], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9463], 00:16:54.049 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:16:54.049 | 99.99th=[ 9597] 00:16:54.050 bw ( KiB/s): min= 4096, max=89932, per=1.07%, avg=40588.67, stdev=37210.44, samples=6 00:16:54.050 iops : min= 4, max= 87, avg=39.50, stdev=36.12, samples=6 00:16:54.050 lat (msec) : 50=0.40%, 2000=43.32%, >=2000=56.28% 00:16:54.050 cpu : usr=0.03%, sys=0.91%, ctx=630, majf=0, minf=32769 00:16:54.050 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.5%, 32=13.0%, >=64=74.5% 00:16:54.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.050 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:16:54.050 issued rwts: total=247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.050 job4: (groupid=0, jobs=1): err= 0: pid=778707: Tue Dec 10 04:04:47 2024 00:16:54.050 read: IOPS=11, BW=11.8MiB/s (12.4MB/s)(127MiB/10734msec) 00:16:54.050 slat (usec): min=388, max=2085.6k, avg=84160.67, stdev=377468.95 00:16:54.050 clat (msec): min=44, max=10730, avg=6233.44, stdev=2016.15 00:16:54.050 lat (msec): min=2001, max=10733, avg=6317.60, stdev=1978.51 00:16:54.050 clat percentiles (msec): 00:16:54.050 | 1.00th=[ 2005], 5.00th=[ 2056], 10.00th=[ 4279], 20.00th=[ 5805], 00:16:54.050 | 30.00th=[ 5873], 40.00th=[ 5940], 50.00th=[ 6007], 60.00th=[ 6141], 00:16:54.050 | 70.00th=[ 6208], 80.00th=[ 6275], 
90.00th=[10671], 95.00th=[10671], 00:16:54.050 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:54.050 | 99.99th=[10671] 00:16:54.050 lat (msec) : 50=0.79%, >=2000=99.21% 00:16:54.050 cpu : usr=0.03%, sys=0.64%, ctx=157, majf=0, minf=32513 00:16:54.050 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.3%, 16=12.6%, 32=25.2%, >=64=50.4% 00:16:54.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.050 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:54.050 issued rwts: total=127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.050 job4: (groupid=0, jobs=1): err= 0: pid=778708: Tue Dec 10 04:04:47 2024 00:16:54.050 read: IOPS=43, BW=43.3MiB/s (45.4MB/s)(434MiB/10031msec) 00:16:54.050 slat (usec): min=50, max=2053.5k, avg=23055.45, stdev=168655.00 00:16:54.050 clat (msec): min=22, max=7658, avg=967.13, stdev=858.87 00:16:54.050 lat (msec): min=63, max=7792, avg=990.19, stdev=920.47 00:16:54.050 clat percentiles (msec): 00:16:54.050 | 1.00th=[ 80], 5.00th=[ 161], 10.00th=[ 247], 20.00th=[ 477], 00:16:54.050 | 30.00th=[ 634], 40.00th=[ 634], 50.00th=[ 693], 60.00th=[ 978], 00:16:54.050 | 70.00th=[ 1284], 80.00th=[ 1435], 90.00th=[ 1502], 95.00th=[ 1536], 00:16:54.050 | 99.00th=[ 5671], 99.50th=[ 5738], 99.90th=[ 7684], 99.95th=[ 7684], 00:16:54.050 | 99.99th=[ 7684] 00:16:54.050 bw ( KiB/s): min=36864, max=194560, per=3.31%, avg=125747.20, stdev=67484.63, samples=5 00:16:54.050 iops : min= 36, max= 190, avg=122.80, stdev=65.90, samples=5 00:16:54.050 lat (msec) : 50=0.23%, 100=3.46%, 250=6.91%, 500=11.29%, 750=30.88% 00:16:54.050 lat (msec) : 1000=7.83%, 2000=36.87%, >=2000=2.53% 00:16:54.050 cpu : usr=0.01%, sys=0.79%, ctx=689, majf=0, minf=32769 00:16:54.050 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:16:54.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.050 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:54.050 issued rwts: total=434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.050 job4: (groupid=0, jobs=1): err= 0: pid=778709: Tue Dec 10 04:04:47 2024 00:16:54.050 read: IOPS=26, BW=26.5MiB/s (27.8MB/s)(286MiB/10782msec) 00:16:54.050 slat (usec): min=50, max=2148.9k, avg=37487.47, stdev=242092.72 00:16:54.050 clat (msec): min=58, max=9105, avg=4489.20, stdev=3630.99 00:16:54.050 lat (msec): min=615, max=9107, avg=4526.69, stdev=3626.78 00:16:54.050 clat percentiles (msec): 00:16:54.050 | 1.00th=[ 609], 5.00th=[ 726], 10.00th=[ 793], 20.00th=[ 1003], 00:16:54.050 | 30.00th=[ 1368], 40.00th=[ 1401], 50.00th=[ 2165], 60.00th=[ 6477], 00:16:54.050 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9060], 00:16:54.050 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:16:54.050 | 99.99th=[ 9060] 00:16:54.050 bw ( KiB/s): min= 6144, max=200704, per=1.22%, avg=46226.29, stdev=71424.59, samples=7 00:16:54.050 iops : min= 6, max= 196, avg=45.14, stdev=69.75, samples=7 00:16:54.050 lat (msec) : 100=0.35%, 750=6.99%, 1000=11.89%, 2000=28.67%, >=2000=52.10% 00:16:54.050 cpu : usr=0.00%, sys=0.85%, ctx=499, majf=0, minf=32769 00:16:54.050 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=78.0% 00:16:54.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.050 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.6% 00:16:54.050 issued rwts: total=286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.050 job4: (groupid=0, jobs=1): err= 0: pid=778710: Tue Dec 10 04:04:47 2024 00:16:54.050 read: IOPS=24, BW=24.4MiB/s (25.6MB/s)(245MiB/10030msec) 00:16:54.050 slat (usec): min=69, max=2080.6k, avg=40813.17, stdev=260116.96 00:16:54.050 clat (msec): min=29, max=9620, avg=1705.18, stdev=2773.37 00:16:54.050 lat (msec): min=30, max=9625, avg=1745.99, stdev=2817.66 00:16:54.050 clat percentiles (msec): 00:16:54.050 | 1.00th=[ 32], 5.00th=[ 87], 10.00th=[ 126], 20.00th=[ 188], 00:16:54.050 | 30.00th=[ 275], 40.00th=[ 359], 50.00th=[ 456], 60.00th=[ 518], 00:16:54.050 | 70.00th=[ 827], 80.00th=[ 1167], 90.00th=[ 5470], 95.00th=[ 9597], 00:16:54.050 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:16:54.050 | 99.99th=[ 9597] 00:16:54.050 bw ( KiB/s): min=241664, max=241664, per=6.37%, avg=241664.00, stdev= 0.00, samples=1 00:16:54.050 iops : min= 236, max= 236, avg=236.00, stdev= 0.00, samples=1 00:16:54.050 lat (msec) : 50=1.63%, 100=6.12%, 250=18.78%, 500=31.84%, 750=9.80% 00:16:54.050 lat (msec) : 1000=6.94%, 2000=4.90%, >=2000=20.00% 00:16:54.050 cpu : usr=0.00%, sys=0.93%, ctx=299, majf=0, minf=32769 00:16:54.050 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.3%, 16=6.5%, 32=13.1%, >=64=74.3% 00:16:54.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.050 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:16:54.050 issued rwts: total=245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.050 job4: (groupid=0, jobs=1): err= 0: pid=778711: Tue Dec 10 04:04:47 2024 00:16:54.050 read: IOPS=13, BW=13.3MiB/s (14.0MB/s)(145MiB/10884msec) 00:16:54.050 slat (usec): min=426, max=4144.7k, avg=74652.15, stdev=428618.79 00:16:54.050 clat (msec): min=58, max=10849, avg=5254.42, stdev=3864.20 00:16:54.050 lat (msec): min=1265, max=10851, avg=5329.07, stdev=3867.61 00:16:54.050 clat percentiles (msec): 00:16:54.050 | 1.00th=[ 1267], 5.00th=[ 1334], 10.00th=[ 1435], 20.00th=[ 1670], 00:16:54.050 | 30.00th=[ 1854], 40.00th=[ 2022], 50.00th=[ 4212], 60.00th=[ 6409], 00:16:54.050 | 70.00th=[ 9463], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:16:54.050 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:54.050 | 99.99th=[10805] 00:16:54.050 bw ( KiB/s): min=16384, max=18432, per=0.46%, avg=17408.00, stdev=1448.15, samples=2 00:16:54.050 iops : min= 16, max= 18, avg=17.00, stdev= 1.41, samples=2 00:16:54.050 lat (msec) : 100=0.69%, 2000=37.93%, >=2000=61.38% 00:16:54.050 cpu : usr=0.00%, sys=0.96%, ctx=224, majf=0, minf=32769 00:16:54.050 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.5%, 16=11.0%, 32=22.1%, >=64=56.6% 00:16:54.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.050 complete : 0=0.0%, 4=94.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.3% 00:16:54.050 issued rwts: total=145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.050 job4: (groupid=0, jobs=1): err= 0: pid=778712: Tue Dec 10 04:04:47 2024 00:16:54.050 read: IOPS=47, BW=47.2MiB/s (49.5MB/s)(513MiB/10866msec) 00:16:54.050 slat (usec): min=98, max=2046.6k, avg=21092.39, stdev=165943.25 00:16:54.050 clat (msec): min=42, max=8452, avg=2602.79, stdev=3063.39 00:16:54.050 lat (msec): min=696, 
max=8455, avg=2623.88, stdev=3069.97 00:16:54.050 clat percentiles (msec): 00:16:54.050 | 1.00th=[ 701], 5.00th=[ 709], 10.00th=[ 726], 20.00th=[ 743], 00:16:54.050 | 30.00th=[ 760], 40.00th=[ 776], 50.00th=[ 827], 60.00th=[ 869], 00:16:54.050 | 70.00th=[ 953], 80.00th=[ 7953], 90.00th=[ 8154], 95.00th=[ 8288], 00:16:54.050 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:16:54.050 | 99.99th=[ 8423] 00:16:54.050 bw ( KiB/s): min= 4087, max=192512, per=2.31%, avg=87607.89, stdev=81977.30, samples=9 00:16:54.050 iops : min= 3, max= 188, avg=85.44, stdev=80.18, samples=9 00:16:54.050 lat (msec) : 50=0.19%, 750=25.73%, 1000=45.42%, 2000=0.19%, >=2000=28.46% 00:16:54.050 cpu : usr=0.03%, sys=1.33%, ctx=763, majf=0, minf=32769 00:16:54.050 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:16:54.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.050 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:54.050 issued rwts: total=513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.050 job4: (groupid=0, jobs=1): err= 0: pid=778713: Tue Dec 10 04:04:47 2024 00:16:54.050 read: IOPS=47, BW=47.5MiB/s (49.8MB/s)(512MiB/10771msec) 00:16:54.050 slat (usec): min=43, max=2102.7k, avg=20884.83, stdev=152091.12 00:16:54.050 clat (msec): min=75, max=5662, avg=2510.47, stdev=1805.95 00:16:54.050 lat (msec): min=746, max=5668, avg=2531.35, stdev=1806.72 00:16:54.050 clat percentiles (msec): 00:16:54.050 | 1.00th=[ 751], 5.00th=[ 827], 10.00th=[ 844], 20.00th=[ 1045], 00:16:54.050 | 30.00th=[ 1183], 40.00th=[ 1200], 50.00th=[ 1888], 60.00th=[ 2400], 00:16:54.050 | 70.00th=[ 2869], 80.00th=[ 5470], 90.00th=[ 5537], 95.00th=[ 5604], 00:16:54.050 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671], 00:16:54.050 | 99.99th=[ 5671] 00:16:54.050 bw ( KiB/s): min=12288, max=122880, per=2.07%, avg=78643.20, stdev=43123.79, samples=10 00:16:54.050 iops : min= 12, max= 120, avg=76.80, stdev=42.11, samples=10 00:16:54.050 lat (msec) : 100=0.20%, 750=1.17%, 1000=15.04%, 2000=36.52%, >=2000=47.07% 00:16:54.050 cpu : usr=0.00%, sys=0.92%, ctx=788, majf=0, minf=32769 00:16:54.050 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:16:54.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.050 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:54.050 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.050 job4: (groupid=0, jobs=1): err= 0: pid=778714: Tue Dec 10 04:04:47 2024 00:16:54.050 read: IOPS=31, BW=32.0MiB/s (33.5MB/s)(321MiB/10043msec) 00:16:54.050 slat (usec): min=38, max=2144.0k, avg=31158.42, stdev=211698.95 00:16:54.050 clat (msec): min=39, max=7689, avg=3026.70, stdev=2725.22 00:16:54.050 lat (msec): min=43, max=9832, avg=3057.86, stdev=2744.74 00:16:54.050 clat percentiles (msec): 00:16:54.050 | 1.00th=[ 75], 5.00th=[ 134], 10.00th=[ 321], 20.00th=[ 464], 00:16:54.050 | 30.00th=[ 642], 40.00th=[ 1053], 50.00th=[ 1485], 60.00th=[ 3641], 00:16:54.050 | 70.00th=[ 6074], 80.00th=[ 6342], 90.00th=[ 6678], 95.00th=[ 6946], 00:16:54.051 | 99.00th=[ 7080], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 7684], 00:16:54.051 | 99.99th=[ 7684] 00:16:54.051 bw ( KiB/s): min= 8192, max=96256, per=1.74%, avg=66160.17, stdev=32248.05, samples=6 00:16:54.051 iops : min= 8, 
max= 94, avg=64.50, stdev=31.41, samples=6 00:16:54.051 lat (msec) : 50=0.93%, 100=1.56%, 250=5.61%, 500=19.63%, 750=4.67% 00:16:54.051 lat (msec) : 1000=6.54%, 2000=16.20%, >=2000=44.86% 00:16:54.051 cpu : usr=0.04%, sys=0.93%, ctx=396, majf=0, minf=32769 00:16:54.051 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.4% 00:16:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.051 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:54.051 issued rwts: total=321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.051 job4: (groupid=0, jobs=1): err= 0: pid=778715: Tue Dec 10 04:04:47 2024 00:16:54.051 read: IOPS=22, BW=22.7MiB/s (23.8MB/s)(246MiB/10833msec) 00:16:54.051 slat (usec): min=103, max=2175.2k, avg=43792.67, stdev=261533.90 00:16:54.051 clat (msec): min=58, max=9775, avg=5274.68, stdev=3888.16 00:16:54.051 lat (msec): min=927, max=9781, avg=5318.47, stdev=3880.51 00:16:54.051 clat percentiles (msec): 00:16:54.051 | 1.00th=[ 927], 5.00th=[ 944], 10.00th=[ 961], 20.00th=[ 1083], 00:16:54.051 | 30.00th=[ 1250], 40.00th=[ 1318], 50.00th=[ 5604], 60.00th=[ 8792], 00:16:54.051 | 70.00th=[ 9060], 80.00th=[ 9463], 90.00th=[ 9597], 95.00th=[ 9731], 00:16:54.051 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:16:54.051 | 99.99th=[ 9731] 00:16:54.051 bw ( KiB/s): min= 8192, max=92160, per=1.06%, avg=40277.33, stdev=36628.10, samples=6 00:16:54.051 iops : min= 8, max= 90, avg=39.33, stdev=35.77, samples=6 00:16:54.051 lat (msec) : 100=0.41%, 1000=14.63%, 2000=26.83%, >=2000=58.13% 00:16:54.051 cpu : usr=0.01%, sys=1.09%, ctx=594, majf=0, minf=32769 00:16:54.051 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.3%, 16=6.5%, 32=13.0%, >=64=74.4% 00:16:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.051 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:16:54.051 issued rwts: total=246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.051 job4: (groupid=0, jobs=1): err= 0: pid=778716: Tue Dec 10 04:04:47 2024 00:16:54.051 read: IOPS=28, BW=29.0MiB/s (30.4MB/s)(313MiB/10803msec) 00:16:54.051 slat (usec): min=83, max=2112.9k, avg=34320.22, stdev=229307.96 00:16:54.051 clat (msec): min=59, max=9335, avg=4101.95, stdev=3891.26 00:16:54.051 lat (msec): min=582, max=9341, avg=4136.27, stdev=3891.96 00:16:54.051 clat percentiles (msec): 00:16:54.051 | 1.00th=[ 584], 5.00th=[ 609], 10.00th=[ 617], 20.00th=[ 642], 00:16:54.051 | 30.00th=[ 693], 40.00th=[ 869], 50.00th=[ 1183], 60.00th=[ 5067], 00:16:54.051 | 70.00th=[ 8792], 80.00th=[ 9060], 90.00th=[ 9194], 95.00th=[ 9329], 00:16:54.051 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:16:54.051 | 99.99th=[ 9329] 00:16:54.051 bw ( KiB/s): min= 4096, max=172032, per=1.43%, avg=54125.71, stdev=70250.25, samples=7 00:16:54.051 iops : min= 4, max= 168, avg=52.86, stdev=68.60, samples=7 00:16:54.051 lat (msec) : 100=0.32%, 750=33.87%, 1000=10.54%, 2000=9.58%, >=2000=45.69% 00:16:54.051 cpu : usr=0.00%, sys=0.83%, ctx=532, majf=0, minf=32769 00:16:54.051 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.2%, >=64=79.9% 00:16:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.051 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:54.051 issued rwts: total=313,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.051 job4: (groupid=0, jobs=1): err= 0: pid=778717: Tue Dec 10 04:04:47 2024 00:16:54.051 read: IOPS=50, BW=50.6MiB/s (53.1MB/s)(544MiB/10744msec) 00:16:54.051 slat (usec): min=41, max=2080.3k, avg=19637.26, stdev=161125.49 00:16:54.051 clat (msec): min=58, max=8007, avg=2381.32, stdev=2972.05 00:16:54.051 lat (msec): min=248, max=8017, avg=2400.95, stdev=2977.71 00:16:54.051 clat percentiles (msec): 00:16:54.051 | 1.00th=[ 249], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 284], 00:16:54.051 | 30.00th=[ 393], 40.00th=[ 642], 50.00th=[ 1099], 60.00th=[ 1301], 00:16:54.051 | 70.00th=[ 1418], 80.00th=[ 7819], 90.00th=[ 7886], 95.00th=[ 7953], 00:16:54.051 | 99.00th=[ 8020], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020], 00:16:54.051 | 99.99th=[ 8020] 00:16:54.051 bw ( KiB/s): min= 8192, max=460800, per=3.21%, avg=121706.14, stdev=165552.69, samples=7 00:16:54.051 iops : min= 8, max= 450, avg=118.71, stdev=161.78, samples=7 00:16:54.051 lat (msec) : 100=0.18%, 250=3.49%, 500=29.96%, 750=9.74%, 1000=4.60% 00:16:54.051 lat (msec) : 2000=26.84%, >=2000=25.18% 00:16:54.051 cpu : usr=0.01%, sys=0.87%, ctx=690, majf=0, minf=32769 00:16:54.051 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4% 00:16:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.051 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:54.051 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.051 job5: (groupid=0, jobs=1): err= 0: pid=778718: Tue Dec 10 04:04:47 2024 00:16:54.051 read: IOPS=2, BW=2293KiB/s (2348kB/s)(24.0MiB/10720msec) 00:16:54.051 slat (msec): min=2, max=2098, avg=444.02, stdev=839.61 00:16:54.051 clat (msec): min=63, max=10619, avg=5413.51, stdev=3451.08 00:16:54.051 lat (msec): min=2034, max=10719, avg=5857.53, stdev=3418.16 00:16:54.051 clat percentiles (msec): 00:16:54.051 | 1.00th=[ 64], 5.00th=[ 2039], 10.00th=[ 2039], 20.00th=[ 2056], 00:16:54.051 | 30.00th=[ 2165], 40.00th=[ 2165], 50.00th=[ 4279], 60.00th=[ 6477], 00:16:54.051 | 70.00th=[ 8557], 80.00th=[ 8557], 90.00th=[10671], 95.00th=[10671], 00:16:54.051 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:54.051 | 99.99th=[10671] 00:16:54.051 lat (msec) : 100=4.17%, >=2000=95.83% 00:16:54.051 cpu : usr=0.00%, sys=0.12%, ctx=87, majf=0, minf=6145 00:16:54.051 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:16:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.051 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:54.051 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.051 job5: (groupid=0, jobs=1): err= 0: pid=778719: Tue Dec 10 04:04:47 2024 00:16:54.051 read: IOPS=140, BW=141MiB/s (148MB/s)(1421MiB/10080msec) 00:16:54.051 slat (usec): min=28, max=2071.6k, avg=7034.95, stdev=77131.07 00:16:54.051 clat (msec): min=79, max=5737, avg=804.56, stdev=1400.36 00:16:54.051 lat (msec): min=82, max=5760, avg=811.60, stdev=1406.75 00:16:54.051 clat percentiles (msec): 00:16:54.051 | 1.00th=[ 125], 5.00th=[ 126], 10.00th=[ 127], 20.00th=[ 130], 00:16:54.051 | 30.00th=[ 211], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 257], 00:16:54.051 | 70.00th=[ 271], 80.00th=[ 869], 
90.00th=[ 2299], 95.00th=[ 5000], 00:16:54.051 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5738], 99.95th=[ 5738], 00:16:54.051 | 99.99th=[ 5738] 00:16:54.051 bw ( KiB/s): min=32768, max=665600, per=7.76%, avg=294456.89, stdev=274494.61, samples=9 00:16:54.051 iops : min= 32, max= 650, avg=287.56, stdev=268.06, samples=9 00:16:54.051 lat (msec) : 100=0.84%, 250=43.77%, 500=31.39%, 750=2.46%, 1000=2.89% 00:16:54.051 lat (msec) : 2000=6.83%, >=2000=11.82% 00:16:54.051 cpu : usr=0.03%, sys=1.43%, ctx=1811, majf=0, minf=32769 00:16:54.051 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.6% 00:16:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.051 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.051 issued rwts: total=1421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.051 job5: (groupid=0, jobs=1): err= 0: pid=778720: Tue Dec 10 04:04:47 2024 00:16:54.051 read: IOPS=47, BW=47.2MiB/s (49.5MB/s)(507MiB/10732msec) 00:16:54.051 slat (usec): min=51, max=2029.9k, avg=21011.47, stdev=155846.80 00:16:54.051 clat (msec): min=76, max=5272, avg=2474.88, stdev=1700.41 00:16:54.051 lat (msec): min=473, max=5319, avg=2495.89, stdev=1699.05 00:16:54.051 clat percentiles (msec): 00:16:54.051 | 1.00th=[ 472], 5.00th=[ 481], 10.00th=[ 498], 20.00th=[ 617], 00:16:54.051 | 30.00th=[ 1318], 40.00th=[ 1418], 50.00th=[ 2165], 60.00th=[ 3171], 00:16:54.051 | 70.00th=[ 3440], 80.00th=[ 4665], 90.00th=[ 5067], 95.00th=[ 5201], 00:16:54.051 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:16:54.051 | 99.99th=[ 5269] 00:16:54.051 bw ( KiB/s): min=18432, max=264192, per=2.56%, avg=97024.00, stdev=88032.95, samples=8 00:16:54.051 iops : min= 18, max= 258, avg=94.75, stdev=85.97, samples=8 00:16:54.051 lat (msec) : 100=0.20%, 500=10.45%, 750=18.15%, 2000=18.93%, >=2000=52.27% 00:16:54.051 cpu : usr=0.01%, sys=0.81%, ctx=801, majf=0, minf=32769 00:16:54.051 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.6% 00:16:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.051 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:54.051 issued rwts: total=507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.051 job5: (groupid=0, jobs=1): err= 0: pid=778721: Tue Dec 10 04:04:47 2024 00:16:54.051 read: IOPS=63, BW=63.8MiB/s (66.9MB/s)(693MiB/10854msec) 00:16:54.051 slat (usec): min=40, max=2148.2k, avg=15584.90, stdev=133907.92 00:16:54.051 clat (msec): min=49, max=5553, avg=1902.63, stdev=1775.41 00:16:54.051 lat (msec): min=259, max=5559, avg=1918.21, stdev=1779.80 00:16:54.051 clat percentiles (msec): 00:16:54.051 | 1.00th=[ 262], 5.00th=[ 266], 10.00th=[ 271], 20.00th=[ 309], 00:16:54.051 | 30.00th=[ 575], 40.00th=[ 1116], 50.00th=[ 1200], 60.00th=[ 1250], 00:16:54.051 | 70.00th=[ 2467], 80.00th=[ 3171], 90.00th=[ 5403], 95.00th=[ 5470], 00:16:54.051 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537], 00:16:54.051 | 99.99th=[ 5537] 00:16:54.051 bw ( KiB/s): min= 8192, max=464896, per=3.05%, avg=115687.80, stdev=136208.55, samples=10 00:16:54.051 iops : min= 8, max= 454, avg=112.90, stdev=133.01, samples=10 00:16:54.051 lat (msec) : 50=0.14%, 500=25.97%, 750=8.80%, 1000=2.74%, 2000=24.24% 00:16:54.051 lat (msec) : >=2000=38.10% 00:16:54.051 cpu : usr=0.03%, sys=1.35%, ctx=1299, 
majf=0, minf=32769 00:16:54.051 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:16:54.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.051 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:54.051 issued rwts: total=693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.051 job5: (groupid=0, jobs=1): err= 0: pid=778722: Tue Dec 10 04:04:47 2024 00:16:54.051 read: IOPS=30, BW=30.3MiB/s (31.8MB/s)(306MiB/10098msec) 00:16:54.051 slat (usec): min=750, max=2140.9k, avg=32741.95, stdev=204756.39 00:16:54.052 clat (msec): min=77, max=8261, avg=3931.85, stdev=3283.44 00:16:54.052 lat (msec): min=140, max=8270, avg=3964.59, stdev=3285.94 00:16:54.052 clat percentiles (msec): 00:16:54.052 | 1.00th=[ 148], 5.00th=[ 317], 10.00th=[ 550], 20.00th=[ 1083], 00:16:54.052 | 30.00th=[ 1318], 40.00th=[ 1368], 50.00th=[ 1452], 60.00th=[ 5604], 00:16:54.052 | 70.00th=[ 7819], 80.00th=[ 8020], 90.00th=[ 8087], 95.00th=[ 8154], 00:16:54.052 | 99.00th=[ 8221], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288], 00:16:54.052 | 99.99th=[ 8288] 00:16:54.052 bw ( KiB/s): min=10240, max=96256, per=1.20%, avg=45552.62, stdev=27526.61, samples=8 00:16:54.052 iops : min= 10, max= 94, avg=44.38, stdev=26.81, samples=8 00:16:54.052 lat (msec) : 100=0.33%, 250=3.27%, 500=5.23%, 750=4.58%, 1000=4.90% 00:16:54.052 lat (msec) : 2000=35.62%, >=2000=46.08% 00:16:54.052 cpu : usr=0.01%, sys=1.04%, ctx=876, majf=0, minf=32769 00:16:54.052 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.5%, >=64=79.4% 00:16:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.052 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:16:54.052 issued rwts: total=306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.052 job5: (groupid=0, jobs=1): err= 0: pid=778723: Tue Dec 10 04:04:47 2024 00:16:54.052 read: IOPS=71, BW=71.1MiB/s (74.5MB/s)(761MiB/10708msec) 00:16:54.052 slat (usec): min=51, max=1858.5k, avg=14002.12, stdev=94662.44 00:16:54.052 clat (msec): min=49, max=3617, avg=1721.10, stdev=897.41 00:16:54.052 lat (msec): min=675, max=3666, avg=1735.10, stdev=898.65 00:16:54.052 clat percentiles (msec): 00:16:54.052 | 1.00th=[ 676], 5.00th=[ 735], 10.00th=[ 751], 20.00th=[ 760], 00:16:54.052 | 30.00th=[ 768], 40.00th=[ 1552], 50.00th=[ 1703], 60.00th=[ 1938], 00:16:54.052 | 70.00th=[ 2333], 80.00th=[ 2635], 90.00th=[ 3071], 95.00th=[ 3239], 00:16:54.052 | 99.00th=[ 3507], 99.50th=[ 3574], 99.90th=[ 3608], 99.95th=[ 3608], 00:16:54.052 | 99.99th=[ 3608] 00:16:54.052 bw ( KiB/s): min=28672, max=182272, per=2.44%, avg=92598.86, stdev=48051.20, samples=14 00:16:54.052 iops : min= 28, max= 178, avg=90.43, stdev=46.93, samples=14 00:16:54.052 lat (msec) : 50=0.13%, 750=12.88%, 1000=25.62%, 2000=27.20%, >=2000=34.17% 00:16:54.052 cpu : usr=0.04%, sys=1.32%, ctx=1216, majf=0, minf=32769 00:16:54.052 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:16:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.052 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:54.052 issued rwts: total=761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.052 job5: (groupid=0, jobs=1): err= 0: pid=778724: Tue Dec 10 04:04:47 2024 
00:16:54.052 read: IOPS=131, BW=131MiB/s (138MB/s)(1328MiB/10113msec) 00:16:54.052 slat (usec): min=30, max=2044.9k, avg=7545.51, stdev=79426.76 00:16:54.052 clat (msec): min=87, max=2826, avg=773.42, stdev=727.87 00:16:54.052 lat (msec): min=114, max=2828, avg=780.96, stdev=731.83 00:16:54.052 clat percentiles (msec): 00:16:54.052 | 1.00th=[ 232], 5.00th=[ 363], 10.00th=[ 380], 20.00th=[ 388], 00:16:54.052 | 30.00th=[ 405], 40.00th=[ 439], 50.00th=[ 485], 60.00th=[ 531], 00:16:54.052 | 70.00th=[ 550], 80.00th=[ 835], 90.00th=[ 2668], 95.00th=[ 2769], 00:16:54.052 | 99.00th=[ 2802], 99.50th=[ 2836], 99.90th=[ 2836], 99.95th=[ 2836], 00:16:54.052 | 99.99th=[ 2836] 00:16:54.052 bw ( KiB/s): min=12288, max=346112, per=5.39%, avg=204747.17, stdev=105737.90, samples=12 00:16:54.052 iops : min= 12, max= 338, avg=199.92, stdev=103.22, samples=12 00:16:54.052 lat (msec) : 100=0.08%, 250=0.98%, 500=51.96%, 750=23.87%, 1000=7.53% 00:16:54.052 lat (msec) : 2000=4.44%, >=2000=11.14% 00:16:54.052 cpu : usr=0.01%, sys=1.59%, ctx=1767, majf=0, minf=32769 00:16:54.052 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:16:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.052 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.052 issued rwts: total=1328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.052 job5: (groupid=0, jobs=1): err= 0: pid=778725: Tue Dec 10 04:04:47 2024 00:16:54.052 read: IOPS=51, BW=52.0MiB/s (54.5MB/s)(559MiB/10754msec) 00:16:54.052 slat (usec): min=28, max=2079.6k, avg=19121.72, stdev=172039.44 00:16:54.052 clat (msec): min=62, max=7199, avg=1978.12, stdev=2690.69 00:16:54.052 lat (msec): min=243, max=7202, avg=1997.24, stdev=2697.03 00:16:54.052 clat percentiles (msec): 00:16:54.052 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 253], 20.00th=[ 255], 00:16:54.052 | 30.00th=[ 259], 40.00th=[ 268], 50.00th=[ 305], 60.00th=[ 550], 00:16:54.052 | 70.00th=[ 2039], 80.00th=[ 5134], 90.00th=[ 7080], 95.00th=[ 7148], 00:16:54.052 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:16:54.052 | 99.99th=[ 7215] 00:16:54.052 bw ( KiB/s): min=10240, max=506890, per=3.87%, avg=146858.33, stdev=200486.25, samples=6 00:16:54.052 iops : min= 10, max= 495, avg=143.33, stdev=195.73, samples=6 00:16:54.052 lat (msec) : 100=0.18%, 250=2.33%, 500=55.46%, 750=7.87%, 1000=3.76% 00:16:54.052 lat (msec) : >=2000=30.41% 00:16:54.052 cpu : usr=0.00%, sys=0.82%, ctx=667, majf=0, minf=32769 00:16:54.052 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:16:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.052 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:54.052 issued rwts: total=559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.052 job5: (groupid=0, jobs=1): err= 0: pid=778726: Tue Dec 10 04:04:47 2024 00:16:54.052 read: IOPS=13, BW=13.6MiB/s (14.2MB/s)(147MiB/10819msec) 00:16:54.052 slat (usec): min=431, max=2119.2k, avg=68211.69, stdev=338857.08 00:16:54.052 clat (msec): min=790, max=10771, avg=4956.94, stdev=4144.29 00:16:54.052 lat (msec): min=844, max=10775, avg=5025.16, stdev=4157.68 00:16:54.052 clat percentiles (msec): 00:16:54.052 | 1.00th=[ 844], 5.00th=[ 927], 10.00th=[ 1083], 20.00th=[ 1401], 00:16:54.052 | 30.00th=[ 1569], 40.00th=[ 1804], 50.00th=[ 1955], 
60.00th=[ 4245], 00:16:54.052 | 70.00th=[ 9731], 80.00th=[10537], 90.00th=[10671], 95.00th=[10805], 00:16:54.052 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:16:54.052 | 99.99th=[10805] 00:16:54.052 bw ( KiB/s): min= 1822, max=38912, per=0.54%, avg=20367.00, stdev=26226.59, samples=2 00:16:54.052 iops : min= 1, max= 38, avg=19.50, stdev=26.16, samples=2 00:16:54.052 lat (msec) : 1000=6.12%, 2000=44.22%, >=2000=49.66% 00:16:54.052 cpu : usr=0.00%, sys=0.91%, ctx=354, majf=0, minf=32769 00:16:54.052 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.4%, 16=10.9%, 32=21.8%, >=64=57.1% 00:16:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.052 complete : 0=0.0%, 4=95.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.8% 00:16:54.052 issued rwts: total=147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.052 job5: (groupid=0, jobs=1): err= 0: pid=778727: Tue Dec 10 04:04:47 2024 00:16:54.052 read: IOPS=61, BW=61.4MiB/s (64.4MB/s)(617MiB/10049msec) 00:16:54.052 slat (usec): min=430, max=2261.4k, avg=16204.65, stdev=91769.21 00:16:54.052 clat (msec): min=47, max=3869, avg=1907.93, stdev=1034.62 00:16:54.052 lat (msec): min=50, max=3879, avg=1924.14, stdev=1036.26 00:16:54.052 clat percentiles (msec): 00:16:54.052 | 1.00th=[ 69], 5.00th=[ 355], 10.00th=[ 827], 20.00th=[ 1301], 00:16:54.052 | 30.00th=[ 1351], 40.00th=[ 1469], 50.00th=[ 1670], 60.00th=[ 1871], 00:16:54.052 | 70.00th=[ 1972], 80.00th=[ 3473], 90.00th=[ 3809], 95.00th=[ 3842], 00:16:54.052 | 99.00th=[ 3842], 99.50th=[ 3876], 99.90th=[ 3876], 99.95th=[ 3876], 00:16:54.052 | 99.99th=[ 3876] 00:16:54.052 bw ( KiB/s): min=16384, max=106496, per=1.76%, avg=66882.20, stdev=27793.37, samples=15 00:16:54.052 iops : min= 16, max= 104, avg=65.20, stdev=27.12, samples=15 00:16:54.052 lat (msec) : 50=0.16%, 100=1.30%, 250=2.43%, 500=2.59%, 750=2.59% 00:16:54.052 lat (msec) : 1000=3.08%, 2000=59.97%, >=2000=27.88% 00:16:54.052 cpu : usr=0.03%, sys=1.17%, ctx=1651, majf=0, minf=32769 00:16:54.052 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:16:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.052 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:54.052 issued rwts: total=617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.052 job5: (groupid=0, jobs=1): err= 0: pid=778728: Tue Dec 10 04:04:47 2024 00:16:54.052 read: IOPS=97, BW=97.2MiB/s (102MB/s)(976MiB/10043msec) 00:16:54.052 slat (usec): min=39, max=2106.7k, avg=10246.98, stdev=93936.71 00:16:54.052 clat (msec): min=38, max=3222, avg=965.38, stdev=804.44 00:16:54.052 lat (msec): min=43, max=3224, avg=975.63, stdev=810.04 00:16:54.052 clat percentiles (msec): 00:16:54.052 | 1.00th=[ 57], 5.00th=[ 351], 10.00th=[ 384], 20.00th=[ 468], 00:16:54.052 | 30.00th=[ 542], 40.00th=[ 550], 50.00th=[ 659], 60.00th=[ 810], 00:16:54.052 | 70.00th=[ 944], 80.00th=[ 1133], 90.00th=[ 2769], 95.00th=[ 3004], 00:16:54.052 | 99.00th=[ 3171], 99.50th=[ 3171], 99.90th=[ 3239], 99.95th=[ 3239], 00:16:54.052 | 99.99th=[ 3239] 00:16:54.052 bw ( KiB/s): min=26624, max=319488, per=3.82%, avg=144896.00, stdev=98838.31, samples=12 00:16:54.052 iops : min= 26, max= 312, avg=141.50, stdev=96.52, samples=12 00:16:54.052 lat (msec) : 50=0.51%, 100=0.72%, 250=1.43%, 500=21.00%, 750=31.97% 00:16:54.052 lat (msec) : 1000=17.32%, 2000=14.04%, 
>=2000=13.01% 00:16:54.052 cpu : usr=0.01%, sys=1.10%, ctx=1708, majf=0, minf=32769 00:16:54.052 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:16:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.052 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.052 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.053 job5: (groupid=0, jobs=1): err= 0: pid=778729: Tue Dec 10 04:04:47 2024 00:16:54.053 read: IOPS=25, BW=25.7MiB/s (27.0MB/s)(260MiB/10102msec) 00:16:54.053 slat (usec): min=451, max=2119.5k, avg=38584.21, stdev=223632.84 00:16:54.053 clat (msec): min=69, max=7515, avg=2030.04, stdev=1436.57 00:16:54.053 lat (msec): min=112, max=7517, avg=2068.63, stdev=1473.68 00:16:54.053 clat percentiles (msec): 00:16:54.053 | 1.00th=[ 115], 5.00th=[ 203], 10.00th=[ 330], 20.00th=[ 693], 00:16:54.053 | 30.00th=[ 1150], 40.00th=[ 1821], 50.00th=[ 2265], 60.00th=[ 2433], 00:16:54.053 | 70.00th=[ 2500], 80.00th=[ 2567], 90.00th=[ 3339], 95.00th=[ 3406], 00:16:54.053 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7483], 99.95th=[ 7483], 00:16:54.053 | 99.99th=[ 7483] 00:16:54.053 bw ( KiB/s): min= 6144, max=90112, per=1.19%, avg=45056.00, stdev=34779.84, samples=6 00:16:54.053 iops : min= 6, max= 88, avg=44.00, stdev=33.96, samples=6 00:16:54.053 lat (msec) : 100=0.38%, 250=5.77%, 500=8.08%, 750=6.54%, 1000=6.15% 00:16:54.053 lat (msec) : 2000=13.85%, >=2000=59.23% 00:16:54.053 cpu : usr=0.00%, sys=0.95%, ctx=662, majf=0, minf=32769 00:16:54.053 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.2%, 32=12.3%, >=64=75.8% 00:16:54.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.053 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:16:54.053 issued rwts: total=260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.053 job5: (groupid=0, jobs=1): err= 0: pid=778730: Tue Dec 10 04:04:47 2024 00:16:54.053 read: IOPS=25, BW=25.1MiB/s (26.3MB/s)(269MiB/10716msec) 00:16:54.053 slat (usec): min=439, max=2086.7k, avg=39594.34, stdev=201542.98 00:16:54.053 clat (msec): min=63, max=6835, avg=2700.07, stdev=924.63 00:16:54.053 lat (msec): min=1057, max=6850, avg=2739.67, stdev=975.91 00:16:54.053 clat percentiles (msec): 00:16:54.053 | 1.00th=[ 1062], 5.00th=[ 1217], 10.00th=[ 1435], 20.00th=[ 1854], 00:16:54.053 | 30.00th=[ 2567], 40.00th=[ 2601], 50.00th=[ 2668], 60.00th=[ 2836], 00:16:54.053 | 70.00th=[ 2970], 80.00th=[ 3272], 90.00th=[ 3608], 95.00th=[ 3742], 00:16:54.053 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:16:54.053 | 99.99th=[ 6812] 00:16:54.053 bw ( KiB/s): min=12288, max=77824, per=1.09%, avg=41252.57, stdev=26078.28, samples=7 00:16:54.053 iops : min= 12, max= 76, avg=40.29, stdev=25.47, samples=7 00:16:54.053 lat (msec) : 100=0.37%, 2000=21.56%, >=2000=78.07% 00:16:54.053 cpu : usr=0.00%, sys=0.64%, ctx=842, majf=0, minf=32769 00:16:54.053 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.9%, >=64=76.6% 00:16:54.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.053 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:16:54.053 issued rwts: total=269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.053 00:16:54.053 Run status group 
0 (all jobs): 00:16:54.053 READ: bw=3707MiB/s (3888MB/s), 1904KiB/s-178MiB/s (1950kB/s-187MB/s), io=39.5GiB (42.4GB), run=10024-10904msec 00:16:54.053 00:16:54.053 Disk stats (read/write): 00:16:54.053 nvme0n1: ios=67700/0, merge=0/0, ticks=7431106/0, in_queue=7431106, util=98.56% 00:16:54.053 nvme1n1: ios=42280/0, merge=0/0, ticks=7530491/0, in_queue=7530491, util=98.79% 00:16:54.053 nvme2n1: ios=45091/0, merge=0/0, ticks=7460151/0, in_queue=7460151, util=98.37% 00:16:54.053 nvme3n1: ios=70949/0, merge=0/0, ticks=7006945/0, in_queue=7006945, util=99.00% 00:16:54.053 nvme4n1: ios=32653/0, merge=0/0, ticks=6324880/0, in_queue=6324880, util=99.03% 00:16:54.053 nvme5n1: ios=62772/0, merge=0/0, ticks=6843601/0, in_queue=6843601, util=99.15% 00:16:54.053 04:04:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:16:54.053 04:04:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:16:54.053 04:04:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:54.053 04:04:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:16:54.310 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:54.310 04:04:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:55.242 04:04:49 
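
fio prints every rate in both binary and decimal units, so the paired figures in the READ summary above are one measurement, not two: 3707 MiB/s x 1.048576 ~= 3888 MB/s, and the total io=39.5 GiB (40448 MiB) divided by the ~10024-10904 ms per-job runtimes roughly reproduces the aggregate of ~3710 MiB/s. The per-device "Disk stats" lines that follow it are the kernel block-layer counters (ios, ticks, utilization) for the six nvme namespaces the job ran against.
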
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:55.242 04:04:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:56.172 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:56.172 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:16:56.172 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:56.172 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:56.173 04:04:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:57.103 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 
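
The trace through here is the teardown phase of target/srq_overwhelm.sh: for each of the six subsystems it disconnects the initiator, waits until the matching serial number disappears from lsblk, then deletes the subsystem over RPC. Condensed into a sketch built only from the commands visible in the trace (the polling loop and the printf serial format are assumptions; the real waitforserial_disconnect helper lives in autotest_common.sh, and rpc_cmd is the harness's rpc.py wrapper):

    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # waitforserial_disconnect: block until the namespace leaves the block layer
        serial=$(printf 'SPDK%014d' "$i")
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done
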
00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:57.104 04:04:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:58.035 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:58.035 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:16:58.035 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:58.292 04:04:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:59.224 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:59.224 rmmod nvme_rdma 00:16:59.224 rmmod nvme_fabrics 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 777135 ']' 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 777135 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 777135 ']' 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 777135 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777135 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777135' 00:16:59.224 killing process with pid 777135 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 777135 00:16:59.224 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 777135 00:16:59.482 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:59.482 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:59.482 00:16:59.482 real 0m30.805s 00:16:59.482 user 1m46.636s 00:16:59.482 sys 0m15.361s 00:16:59.482 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.482 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:59.482 ************************************ 00:16:59.482 END TEST nvmf_srq_overwhelm 00:16:59.482 ************************************ 00:16:59.741 04:04:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:59.741 04:04:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.741 04:04:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.741 04:04:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.741 ************************************ 00:16:59.741 START TEST nvmf_shutdown 00:16:59.741 ************************************ 00:16:59.741 04:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:59.741 * Looking for test storage... 
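
The killprocess 777135 sequence traced above (pid 777135 is the nvmf target app started earlier; ps reports its comm as reactor_0, an SPDK reactor thread) reduces to roughly the following. This is a reconstruction from the xtrace lines, not the actual autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0     # nothing left to kill
        # refuse to signal a bare sudo wrapper (trace: '[' reactor_0 = sudo ']')
        if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true
        fi
    }
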
00:16:59.741 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:59.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.741 --rc genhtml_branch_coverage=1 00:16:59.741 --rc genhtml_function_coverage=1 00:16:59.741 --rc genhtml_legend=1 00:16:59.741 --rc geninfo_all_blocks=1 00:16:59.741 --rc geninfo_unexecuted_blocks=1 00:16:59.741 00:16:59.741 ' 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:59.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.741 --rc genhtml_branch_coverage=1 00:16:59.741 --rc genhtml_function_coverage=1 00:16:59.741 --rc genhtml_legend=1 00:16:59.741 --rc geninfo_all_blocks=1 00:16:59.741 --rc geninfo_unexecuted_blocks=1 00:16:59.741 00:16:59.741 ' 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:59.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.741 --rc genhtml_branch_coverage=1 00:16:59.741 --rc genhtml_function_coverage=1 00:16:59.741 --rc genhtml_legend=1 00:16:59.741 --rc geninfo_all_blocks=1 00:16:59.741 --rc geninfo_unexecuted_blocks=1 00:16:59.741 00:16:59.741 ' 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:59.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.741 --rc genhtml_branch_coverage=1 00:16:59.741 --rc genhtml_function_coverage=1 00:16:59.741 --rc genhtml_legend=1 00:16:59.741 --rc geninfo_all_blocks=1 00:16:59.741 --rc geninfo_unexecuted_blocks=1 00:16:59.741 00:16:59.741 ' 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.741 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.742 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:59.742 04:04:54 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.742 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:00.000 ************************************ 00:17:00.000 START TEST nvmf_shutdown_tc1 00:17:00.000 ************************************ 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:00.000 04:04:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.556 04:04:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.556 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:06.557 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:06.557 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:18:00.0: mlx_0_0' 00:17:06.557 Found net devices under 0000:18:00.0: mlx_0_0 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:06.557 Found net devices under 0000:18:00.1: mlx_0_1 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
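The trace above loads the kernel RDMA stack and then starts allocate_nic_ips for the two detected mlx5 ports. A minimal sketch of that module-load step, assuming the loop form is illustrative rather than SPDK's actual helper (the suite traces each modprobe individually):

# Load the kernel modules the RDMA test path depends on
# (module list taken verbatim from the trace above).
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done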
00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:06.557 04:04:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:06.557 04:04:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:06.557 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:06.557 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:06.557 altname enp24s0f0np0 00:17:06.557 altname ens785f0np0 00:17:06.557 inet 192.168.100.8/24 scope global mlx_0_0 00:17:06.557 valid_lft forever preferred_lft forever 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:06.557 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:06.557 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:06.557 altname enp24s0f1np1 00:17:06.557 altname ens785f1np1 00:17:06.557 inet 192.168.100.9/24 scope global mlx_0_1 00:17:06.557 valid_lft forever preferred_lft forever 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:17:06.557 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.558 
04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:06.558 192.168.100.9' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:06.558 192.168.100.9' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:17:06.558 04:05:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:06.558 192.168.100.9' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=785072 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 785072 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 785072 ']' 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:06.558 [2024-12-10 04:05:00.159791] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
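The head/tail pipeline above turns the two discovered addresses into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. The per-interface extraction it builds on reduces to this sketch (function name illustrative; interfaces and addresses are the ones from this run):

# Print the first IPv4 address bound to an interface: field 4 of
# `ip -o -4 addr show` is "ADDR/PREFIX"; cut drops the prefix length.
get_ipv4() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ipv4 mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ipv4 mlx_0_1)   # 192.168.100.9 in this run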
00:17:06.558 [2024-12-10 04:05:00.159831] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.558 [2024-12-10 04:05:00.218675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.558 [2024-12-10 04:05:00.258391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.558 [2024-12-10 04:05:00.258423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.558 [2024-12-10 04:05:00.258430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.558 [2024-12-10 04:05:00.258435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.558 [2024-12-10 04:05:00.258440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.558 [2024-12-10 04:05:00.259802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.558 [2024-12-10 04:05:00.259818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.558 [2024-12-10 04:05:00.259922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.558 [2024-12-10 04:05:00.259923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:06.558 [2024-12-10 04:05:00.425383] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13d63c0/0x13da8b0) succeed. 00:17:06.558 [2024-12-10 04:05:00.433536] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13d7a50/0x141bf50) succeed. 
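At this point the target (PID 785072, core mask 0x1E) is up and the rdma transport has been created, which is what produces the two create_ib_device notices that follow. Driven by hand, the same step would look roughly like this sketch (the standalone rpc.py client and the default /var/tmp/spdk.sock path are assumptions; the flags are the ones traced above):

# Create the RDMA transport on a running nvmf_tgt via SPDK's JSON-RPC client.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192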
00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.558 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.559 04:05:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:06.559 Malloc1 00:17:06.559 [2024-12-10 04:05:00.658327] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:06.559 Malloc2 00:17:06.559 Malloc3 00:17:06.559 Malloc4 00:17:06.559 Malloc5 00:17:06.559 Malloc6 00:17:06.559 Malloc7 00:17:06.820 Malloc8 00:17:06.820 Malloc9 00:17:06.820 Malloc10 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=785245 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 785245 /var/tmp/bdevperf.sock 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 785245 ']' 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:06.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
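waitforlisten above blocks until the bdev_svc helper (PID 785245) answers on /var/tmp/bdevperf.sock. Stripped of the suite's retry bookkeeping and RPC probe, the idea is a bounded poll for the UNIX socket; names and the retry budget below are illustrative:

# Poll for an app's RPC socket to appear, giving up after ~10 seconds.
wait_for_rpc_sock() {
    local sock=$1 retries=100
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
wait_for_rpc_sock /var/tmp/bdevperf.sock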
00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.820 { 00:17:06.820 "params": { 00:17:06.820 "name": "Nvme$subsystem", 00:17:06.820 "trtype": "$TEST_TRANSPORT", 00:17:06.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.820 "adrfam": "ipv4", 00:17:06.820 "trsvcid": "$NVMF_PORT", 00:17:06.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.820 "hdgst": ${hdgst:-false}, 00:17:06.820 "ddgst": ${ddgst:-false} 00:17:06.820 }, 00:17:06.820 "method": "bdev_nvme_attach_controller" 00:17:06.820 } 00:17:06.820 EOF 00:17:06.820 )") 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.820 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.820 { 00:17:06.820 "params": { 00:17:06.820 "name": "Nvme$subsystem", 00:17:06.820 "trtype": "$TEST_TRANSPORT", 00:17:06.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.820 "adrfam": "ipv4", 00:17:06.821 "trsvcid": "$NVMF_PORT", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.821 "hdgst": ${hdgst:-false}, 00:17:06.821 "ddgst": ${ddgst:-false} 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 } 00:17:06.821 EOF 00:17:06.821 )") 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.821 { 00:17:06.821 "params": { 00:17:06.821 "name": "Nvme$subsystem", 00:17:06.821 "trtype": "$TEST_TRANSPORT", 00:17:06.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.821 "adrfam": "ipv4", 00:17:06.821 "trsvcid": "$NVMF_PORT", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.821 "hdgst": ${hdgst:-false}, 00:17:06.821 "ddgst": ${ddgst:-false} 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 } 00:17:06.821 EOF 00:17:06.821 )") 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.821 { 00:17:06.821 "params": { 00:17:06.821 "name": "Nvme$subsystem", 00:17:06.821 "trtype": "$TEST_TRANSPORT", 00:17:06.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.821 "adrfam": "ipv4", 00:17:06.821 "trsvcid": "$NVMF_PORT", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.821 "hdgst": ${hdgst:-false}, 00:17:06.821 "ddgst": ${ddgst:-false} 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 } 00:17:06.821 EOF 00:17:06.821 )") 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.821 { 00:17:06.821 "params": { 00:17:06.821 "name": "Nvme$subsystem", 00:17:06.821 "trtype": "$TEST_TRANSPORT", 00:17:06.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.821 "adrfam": "ipv4", 00:17:06.821 "trsvcid": "$NVMF_PORT", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.821 "hdgst": ${hdgst:-false}, 00:17:06.821 "ddgst": ${ddgst:-false} 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 } 00:17:06.821 EOF 00:17:06.821 )") 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.821 { 00:17:06.821 "params": { 00:17:06.821 "name": "Nvme$subsystem", 00:17:06.821 "trtype": "$TEST_TRANSPORT", 00:17:06.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.821 "adrfam": "ipv4", 00:17:06.821 "trsvcid": "$NVMF_PORT", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.821 "hdgst": ${hdgst:-false}, 00:17:06.821 "ddgst": ${ddgst:-false} 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 } 00:17:06.821 EOF 00:17:06.821 )") 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.821 [2024-12-10 04:05:01.140105] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
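The repeated config+=("$(cat <<-EOF ... EOF)") entries traced above are how gen_nvmf_target_json assembles one bdev_nvme_attach_controller stanza per subsystem before jq validates the joined result. Reduced to its shape (field set trimmed relative to the log; the suite then embeds these stanzas in a full bdev-subsystem config, wrapper omitted here):

config=()
for subsystem in 1 2 3; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the stanzas with commas and pretty-print; the subshell keeps IFS local.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .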
00:17:06.821 [2024-12-10 04:05:01.140149] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.821 { 00:17:06.821 "params": { 00:17:06.821 "name": "Nvme$subsystem", 00:17:06.821 "trtype": "$TEST_TRANSPORT", 00:17:06.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.821 "adrfam": "ipv4", 00:17:06.821 "trsvcid": "$NVMF_PORT", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.821 "hdgst": ${hdgst:-false}, 00:17:06.821 "ddgst": ${ddgst:-false} 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 } 00:17:06.821 EOF 00:17:06.821 )") 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.821 { 00:17:06.821 "params": { 00:17:06.821 "name": "Nvme$subsystem", 00:17:06.821 "trtype": "$TEST_TRANSPORT", 00:17:06.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.821 "adrfam": "ipv4", 00:17:06.821 "trsvcid": "$NVMF_PORT", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.821 "hdgst": ${hdgst:-false}, 00:17:06.821 "ddgst": ${ddgst:-false} 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 } 00:17:06.821 EOF 00:17:06.821 )") 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.821 { 00:17:06.821 "params": { 00:17:06.821 "name": "Nvme$subsystem", 00:17:06.821 "trtype": "$TEST_TRANSPORT", 00:17:06.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.821 "adrfam": "ipv4", 00:17:06.821 "trsvcid": "$NVMF_PORT", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.821 "hdgst": ${hdgst:-false}, 00:17:06.821 "ddgst": ${ddgst:-false} 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 } 00:17:06.821 EOF 00:17:06.821 )") 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:06.821 { 00:17:06.821 "params": { 00:17:06.821 "name": "Nvme$subsystem", 00:17:06.821 "trtype": "$TEST_TRANSPORT", 00:17:06.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.821 "adrfam": 
"ipv4", 00:17:06.821 "trsvcid": "$NVMF_PORT", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.821 "hdgst": ${hdgst:-false}, 00:17:06.821 "ddgst": ${ddgst:-false} 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 } 00:17:06.821 EOF 00:17:06.821 )") 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:17:06.821 04:05:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:06.821 "params": { 00:17:06.821 "name": "Nvme1", 00:17:06.821 "trtype": "rdma", 00:17:06.821 "traddr": "192.168.100.8", 00:17:06.821 "adrfam": "ipv4", 00:17:06.821 "trsvcid": "4420", 00:17:06.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.821 "hdgst": false, 00:17:06.821 "ddgst": false 00:17:06.821 }, 00:17:06.821 "method": "bdev_nvme_attach_controller" 00:17:06.821 },{ 00:17:06.822 "params": { 00:17:06.822 "name": "Nvme2", 00:17:06.822 "trtype": "rdma", 00:17:06.822 "traddr": "192.168.100.8", 00:17:06.822 "adrfam": "ipv4", 00:17:06.822 "trsvcid": "4420", 00:17:06.822 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:06.822 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:06.822 "hdgst": false, 00:17:06.822 "ddgst": false 00:17:06.822 }, 00:17:06.822 "method": "bdev_nvme_attach_controller" 00:17:06.822 },{ 00:17:06.822 "params": { 00:17:06.822 "name": "Nvme3", 00:17:06.822 "trtype": "rdma", 00:17:06.822 "traddr": "192.168.100.8", 00:17:06.822 "adrfam": "ipv4", 00:17:06.822 "trsvcid": "4420", 00:17:06.822 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:06.822 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:06.822 "hdgst": false, 00:17:06.822 "ddgst": false 00:17:06.822 }, 00:17:06.822 "method": "bdev_nvme_attach_controller" 00:17:06.822 },{ 00:17:06.822 "params": { 00:17:06.822 "name": "Nvme4", 00:17:06.822 "trtype": "rdma", 00:17:06.822 "traddr": "192.168.100.8", 00:17:06.822 "adrfam": "ipv4", 00:17:06.822 "trsvcid": "4420", 00:17:06.822 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:06.822 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:06.822 "hdgst": false, 00:17:06.822 "ddgst": false 00:17:06.822 }, 00:17:06.822 "method": "bdev_nvme_attach_controller" 00:17:06.822 },{ 00:17:06.822 "params": { 00:17:06.822 "name": "Nvme5", 00:17:06.822 "trtype": "rdma", 00:17:06.822 "traddr": "192.168.100.8", 00:17:06.822 "adrfam": "ipv4", 00:17:06.822 "trsvcid": "4420", 00:17:06.822 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:06.822 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:06.822 "hdgst": false, 00:17:06.822 "ddgst": false 00:17:06.822 }, 00:17:06.822 "method": "bdev_nvme_attach_controller" 00:17:06.822 },{ 00:17:06.822 "params": { 00:17:06.822 "name": "Nvme6", 00:17:06.822 "trtype": "rdma", 00:17:06.822 "traddr": "192.168.100.8", 00:17:06.822 "adrfam": "ipv4", 00:17:06.822 "trsvcid": "4420", 00:17:06.822 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:06.822 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:06.822 "hdgst": false, 00:17:06.822 "ddgst": false 00:17:06.822 }, 00:17:06.822 "method": "bdev_nvme_attach_controller" 00:17:06.822 },{ 00:17:06.822 "params": { 00:17:06.822 "name": "Nvme7", 00:17:06.822 "trtype": "rdma", 
00:17:06.822 "traddr": "192.168.100.8", 00:17:06.822 "adrfam": "ipv4", 00:17:06.822 "trsvcid": "4420", 00:17:06.822 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:06.822 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:06.822 "hdgst": false, 00:17:06.822 "ddgst": false 00:17:06.822 }, 00:17:06.822 "method": "bdev_nvme_attach_controller" 00:17:06.822 },{ 00:17:06.822 "params": { 00:17:06.822 "name": "Nvme8", 00:17:06.822 "trtype": "rdma", 00:17:06.822 "traddr": "192.168.100.8", 00:17:06.822 "adrfam": "ipv4", 00:17:06.822 "trsvcid": "4420", 00:17:06.822 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:06.822 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:06.822 "hdgst": false, 00:17:06.822 "ddgst": false 00:17:06.822 }, 00:17:06.822 "method": "bdev_nvme_attach_controller" 00:17:06.822 },{ 00:17:06.822 "params": { 00:17:06.822 "name": "Nvme9", 00:17:06.822 "trtype": "rdma", 00:17:06.822 "traddr": "192.168.100.8", 00:17:06.822 "adrfam": "ipv4", 00:17:06.822 "trsvcid": "4420", 00:17:06.822 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:06.822 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:06.822 "hdgst": false, 00:17:06.822 "ddgst": false 00:17:06.822 }, 00:17:06.822 "method": "bdev_nvme_attach_controller" 00:17:06.822 },{ 00:17:06.822 "params": { 00:17:06.822 "name": "Nvme10", 00:17:06.822 "trtype": "rdma", 00:17:06.822 "traddr": "192.168.100.8", 00:17:06.822 "adrfam": "ipv4", 00:17:06.822 "trsvcid": "4420", 00:17:06.822 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:06.822 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:06.822 "hdgst": false, 00:17:06.822 "ddgst": false 00:17:06.822 }, 00:17:06.822 "method": "bdev_nvme_attach_controller" 00:17:06.822 }' 00:17:06.822 [2024-12-10 04:05:01.201261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.081 [2024-12-10 04:05:01.239404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.012 04:05:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.012 04:05:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:17:08.012 04:05:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:08.012 04:05:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.012 04:05:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:08.012 04:05:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.012 04:05:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 785245 00:17:08.012 04:05:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:17:08.012 04:05:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:17:08.944 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 785245 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 785072 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.944 { 00:17:08.944 "params": { 00:17:08.944 "name": "Nvme$subsystem", 00:17:08.944 "trtype": "$TEST_TRANSPORT", 00:17:08.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.944 "adrfam": "ipv4", 00:17:08.944 "trsvcid": "$NVMF_PORT", 00:17:08.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.944 "hdgst": ${hdgst:-false}, 00:17:08.944 "ddgst": ${ddgst:-false} 00:17:08.944 }, 00:17:08.944 "method": "bdev_nvme_attach_controller" 00:17:08.944 } 00:17:08.944 EOF 00:17:08.944 )") 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.944 { 00:17:08.944 "params": { 00:17:08.944 "name": "Nvme$subsystem", 00:17:08.944 "trtype": "$TEST_TRANSPORT", 00:17:08.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.944 "adrfam": "ipv4", 00:17:08.944 "trsvcid": "$NVMF_PORT", 00:17:08.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.944 "hdgst": ${hdgst:-false}, 00:17:08.944 "ddgst": ${ddgst:-false} 00:17:08.944 }, 00:17:08.944 "method": "bdev_nvme_attach_controller" 00:17:08.944 } 00:17:08.944 EOF 00:17:08.944 )") 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.944 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.944 { 00:17:08.944 "params": { 00:17:08.944 "name": "Nvme$subsystem", 00:17:08.944 "trtype": "$TEST_TRANSPORT", 00:17:08.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.944 "adrfam": "ipv4", 00:17:08.944 "trsvcid": "$NVMF_PORT", 00:17:08.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.944 "hdgst": ${hdgst:-false}, 00:17:08.944 "ddgst": ${ddgst:-false} 00:17:08.944 }, 00:17:08.944 "method": "bdev_nvme_attach_controller" 00:17:08.945 } 00:17:08.945 EOF 00:17:08.945 )") 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.945 04:05:03 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.945 { 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme$subsystem", 00:17:08.945 "trtype": "$TEST_TRANSPORT", 00:17:08.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "$NVMF_PORT", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.945 "hdgst": ${hdgst:-false}, 00:17:08.945 "ddgst": ${ddgst:-false} 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 } 00:17:08.945 EOF 00:17:08.945 )") 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.945 { 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme$subsystem", 00:17:08.945 "trtype": "$TEST_TRANSPORT", 00:17:08.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "$NVMF_PORT", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.945 "hdgst": ${hdgst:-false}, 00:17:08.945 "ddgst": ${ddgst:-false} 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 } 00:17:08.945 EOF 00:17:08.945 )") 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.945 { 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme$subsystem", 00:17:08.945 "trtype": "$TEST_TRANSPORT", 00:17:08.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "$NVMF_PORT", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.945 "hdgst": ${hdgst:-false}, 00:17:08.945 "ddgst": ${ddgst:-false} 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 } 00:17:08.945 EOF 00:17:08.945 )") 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.945 [2024-12-10 04:05:03.146691] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:17:08.945 [2024-12-10 04:05:03.146737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785673 ] 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.945 { 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme$subsystem", 00:17:08.945 "trtype": "$TEST_TRANSPORT", 00:17:08.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "$NVMF_PORT", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.945 "hdgst": ${hdgst:-false}, 00:17:08.945 "ddgst": ${ddgst:-false} 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 } 00:17:08.945 EOF 00:17:08.945 )") 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.945 { 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme$subsystem", 00:17:08.945 "trtype": "$TEST_TRANSPORT", 00:17:08.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "$NVMF_PORT", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.945 "hdgst": ${hdgst:-false}, 00:17:08.945 "ddgst": ${ddgst:-false} 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 } 00:17:08.945 EOF 00:17:08.945 )") 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.945 { 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme$subsystem", 00:17:08.945 "trtype": "$TEST_TRANSPORT", 00:17:08.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "$NVMF_PORT", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.945 "hdgst": ${hdgst:-false}, 00:17:08.945 "ddgst": ${ddgst:-false} 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 } 00:17:08.945 EOF 00:17:08.945 )") 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.945 { 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme$subsystem", 00:17:08.945 "trtype": "$TEST_TRANSPORT", 00:17:08.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "$NVMF_PORT", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.945 "hdgst": ${hdgst:-false}, 00:17:08.945 "ddgst": ${ddgst:-false} 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 } 00:17:08.945 EOF 00:17:08.945 )") 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:17:08.945 04:05:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme1", 00:17:08.945 "trtype": "rdma", 00:17:08.945 "traddr": "192.168.100.8", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "4420", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:08.945 "hdgst": false, 00:17:08.945 "ddgst": false 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 },{ 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme2", 00:17:08.945 "trtype": "rdma", 00:17:08.945 "traddr": "192.168.100.8", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "4420", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:08.945 "hdgst": false, 00:17:08.945 "ddgst": false 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 },{ 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme3", 00:17:08.945 "trtype": "rdma", 00:17:08.945 "traddr": "192.168.100.8", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "4420", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:08.945 "hdgst": false, 00:17:08.945 "ddgst": false 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 },{ 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme4", 00:17:08.945 "trtype": "rdma", 00:17:08.945 "traddr": "192.168.100.8", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "4420", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:08.945 "hdgst": false, 00:17:08.945 "ddgst": false 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 },{ 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme5", 00:17:08.945 "trtype": "rdma", 00:17:08.945 "traddr": "192.168.100.8", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "4420", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:08.945 "hdgst": false, 00:17:08.945 "ddgst": false 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 },{ 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme6", 00:17:08.945 "trtype": "rdma", 00:17:08.945 "traddr": "192.168.100.8", 00:17:08.945 "adrfam": "ipv4", 00:17:08.945 "trsvcid": "4420", 00:17:08.945 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:08.945 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:08.945 "hdgst": false, 00:17:08.945 "ddgst": false 00:17:08.945 }, 00:17:08.945 "method": "bdev_nvme_attach_controller" 00:17:08.945 },{ 00:17:08.945 "params": { 00:17:08.945 "name": "Nvme7", 00:17:08.946 "trtype": "rdma", 00:17:08.946 "traddr": "192.168.100.8", 00:17:08.946 "adrfam": "ipv4", 00:17:08.946 "trsvcid": "4420", 00:17:08.946 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:08.946 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:08.946 "hdgst": false, 00:17:08.946 "ddgst": false 00:17:08.946 }, 00:17:08.946 "method": "bdev_nvme_attach_controller" 00:17:08.946 },{ 00:17:08.946 "params": { 00:17:08.946 "name": "Nvme8", 00:17:08.946 "trtype": "rdma", 00:17:08.946 "traddr": "192.168.100.8", 00:17:08.946 "adrfam": "ipv4", 00:17:08.946 "trsvcid": "4420", 00:17:08.946 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:08.946 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:08.946 "hdgst": false, 00:17:08.946 "ddgst": false 00:17:08.946 }, 00:17:08.946 "method": "bdev_nvme_attach_controller" 00:17:08.946 },{ 00:17:08.946 "params": { 00:17:08.946 "name": "Nvme9", 00:17:08.946 "trtype": "rdma", 00:17:08.946 "traddr": "192.168.100.8", 00:17:08.946 "adrfam": "ipv4", 00:17:08.946 "trsvcid": "4420", 00:17:08.946 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:08.946 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:08.946 "hdgst": false, 00:17:08.946 "ddgst": false 00:17:08.946 }, 00:17:08.946 "method": "bdev_nvme_attach_controller" 00:17:08.946 },{ 00:17:08.946 "params": { 00:17:08.946 "name": "Nvme10", 00:17:08.946 "trtype": "rdma", 00:17:08.946 "traddr": "192.168.100.8", 00:17:08.946 "adrfam": "ipv4", 00:17:08.946 "trsvcid": "4420", 00:17:08.946 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:08.946 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:08.946 "hdgst": false, 00:17:08.946 "ddgst": false 00:17:08.946 }, 00:17:08.946 "method": "bdev_nvme_attach_controller" 00:17:08.946 }' 00:17:08.946 [2024-12-10 04:05:03.207260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.946 [2024-12-10 04:05:03.245298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.878 Running I/O for 1 seconds... 
00:17:11.250 3760.00 IOPS, 235.00 MiB/s
00:17:11.250 Latency(us)
00:17:11.250 [2024-12-10T03:05:05.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:11.250 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme1n1 : 1.16 406.72 25.42 0.00 0.00 154577.06 6213.78 205054.86
00:17:11.250 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme2n1 : 1.16 401.23 25.08 0.00 0.00 153413.67 7621.59 153014.42
00:17:11.250 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme3n1 : 1.16 409.52 25.60 0.00 0.00 148899.55 15146.10 145247.19
00:17:11.250 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme4n1 : 1.16 403.19 25.20 0.00 0.00 148852.82 4951.61 135149.80
00:17:11.250 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme5n1 : 1.16 395.16 24.70 0.00 0.00 149602.91 22427.88 123498.95
00:17:11.250 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme6n1 : 1.16 413.77 25.86 0.00 0.00 141967.26 22524.97 117285.17
00:17:11.250 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme7n1 : 1.16 404.83 25.30 0.00 0.00 142672.85 18447.17 107964.49
00:17:11.250 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme8n1 : 1.16 407.94 25.50 0.00 0.00 139854.38 11699.39 100197.26
00:17:11.250 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme9n1 : 1.16 389.64 24.35 0.00 0.00 143393.90 9854.67 92818.39
00:17:11.250 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.250 Verification LBA range: start 0x0 length 0x400
00:17:11.250 Nvme10n1 : 1.17 381.83 23.86 0.00 0.00 147285.55 8980.86 212822.09
00:17:11.250 [2024-12-10T03:05:05.639Z] ===================================================================================================================
00:17:11.250 [2024-12-10T03:05:05.639Z] Total : 4013.83 250.86 0.00 0.00 147032.81 4951.61 212822.09
00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:17:11.250 04:05:05
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:11.250 rmmod nvme_rdma 00:17:11.250 rmmod nvme_fabrics 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 785072 ']' 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 785072 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 785072 ']' 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 785072 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.250 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785072 00:17:11.507 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:11.507 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:11.507 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785072' 00:17:11.507 killing process with pid 785072 00:17:11.507 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 785072 00:17:11.507 04:05:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 785072 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:11.766 00:17:11.766 real 0m11.899s 00:17:11.766 user 0m27.423s 00:17:11.766 sys 0m5.359s 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@10 -- # set +x 00:17:11.766 ************************************ 00:17:11.766 END TEST nvmf_shutdown_tc1 00:17:11.766 ************************************ 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:11.766 ************************************ 00:17:11.766 START TEST nvmf_shutdown_tc2 00:17:11.766 ************************************ 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.766 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:12.026 04:05:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:12.026 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:12.026 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:12.026 Found net devices under 0000:18:00.0: mlx_0_0 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:12.026 Found net devices under 0000:18:00.1: mlx_0_1 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:12.026 04:05:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:12.026 04:05:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:17:12.026 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:17:12.026 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:17:12.026 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff
00:17:12.026 altname enp24s0f0np0
00:17:12.026 altname ens785f0np0
00:17:12.026 inet 192.168.100.8/24 scope global mlx_0_0
00:17:12.026 valid_lft forever preferred_lft forever
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:17:12.027 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:17:12.027 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff
00:17:12.027 altname enp24s0f1np1
00:17:12.027 altname ens785f1np1
00:17:12.027 inet 192.168.100.9/24 scope global mlx_0_1
00:17:12.027 valid_lft forever preferred_lft forever
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:17:12.027 04:05:06
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:12.027 192.168.100.9' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:12.027 192.168.100.9' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:12.027 192.168.100.9' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=786305 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 786305 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 786305 ']' 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
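[Editor's note] At this point nvmf_tgt has just been launched (-i 0 -e 0xFFFF -m 0x1E) and waitforlisten polls until the RPC socket answers before the script proceeds. The rough shape of that step is sketched below; the real helper lives in test/common/autotest_common.sh, and the function name, retry budget, and the use of rpc_get_methods as the liveness probe are assumptions here, not the harness's exact code:

wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while ((retries-- > 0)); do
        # Give up early if the target died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # The socket is usable once the RPC server answers a harmless call.
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

# e.g.  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
#       wait_for_rpc_socket $! /var/tmp/spdk.sock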
00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.027 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:12.285 [2024-12-10 04:05:06.415654] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:12.285 [2024-12-10 04:05:06.415701] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.285 [2024-12-10 04:05:06.474891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.285 [2024-12-10 04:05:06.512442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.285 [2024-12-10 04:05:06.512478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.285 [2024-12-10 04:05:06.512488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.285 [2024-12-10 04:05:06.512493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.285 [2024-12-10 04:05:06.512498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.285 [2024-12-10 04:05:06.513952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.285 [2024-12-10 04:05:06.514012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.285 [2024-12-10 04:05:06.514143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.285 [2024-12-10 04:05:06.514145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:12.285 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.285 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:17:12.285 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.285 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.285 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:12.285 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.285 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:12.286 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.286 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:12.543 [2024-12-10 04:05:06.677659] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9b73c0/0x9bb8b0) succeed. 00:17:12.543 [2024-12-10 04:05:06.685916] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9b8a50/0x9fcf50) succeed. 
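[Editor's note] With the transport up (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192) and both IB devices created, the next stage assembles rpcs.txt (the shutdown.sh@28/@29 cat loop below) and replays it through rpc_cmd, which is what makes Malloc1 through Malloc10 and the 192.168.100.8:4420 listener appear. Written as direct rpc.py calls, that batch boils down to roughly the following; the Malloc geometry is an illustrative assumption, while the transport options, NQN naming, and listener address mirror the log:

rpc=scripts/rpc.py    # against the default /var/tmp/spdk.sock

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in {1..10}; do
    $rpc bdev_malloc_create -b Malloc$i 128 512        # size/block assumed
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420
done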
00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.543 04:05:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:12.543 Malloc1 00:17:12.543 [2024-12-10 04:05:06.893148] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:12.543 Malloc2 00:17:12.801 Malloc3 00:17:12.801 Malloc4 00:17:12.801 Malloc5 00:17:12.801 Malloc6 00:17:12.801 Malloc7 00:17:13.059 Malloc8 00:17:13.059 Malloc9 00:17:13.059 Malloc10 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=786611 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 786611 /var/tmp/bdevperf.sock 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 786611 ']' 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
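The repeated shutdown.sh@28-29 iterations above append one RPC batch per subsystem to rpcs.txt, and the Malloc1 through Malloc10 lines plus the 192.168.100.8:4420 listener notice are the target replaying that file at shutdown.sh@36. A hypothetical reconstruction of that loop; only the loop shape, the file name, and the per-subsystem Malloc/cnode naming are evidenced by the trace, while the bdev size and serial-number arguments are illustrative assumptions:

    num_subsystems=({1..10})   # shutdown.sh@23
    for i in "${num_subsystems[@]}"; do
        # One malloc bdev, one subsystem, one namespace, one RDMA listener
        # per index; 64 MiB / 512 B and the SPDK$i serial are placeholders.
        cat >> "$testdir/rpcs.txt" << EOL
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    EOL
    done
    # shutdown.sh@36 then replays the whole batch in a single rpc_cmd session
    rpc_cmd < "$testdir/rpcs.txt"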
00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.059 { 00:17:13.059 "params": { 00:17:13.059 "name": "Nvme$subsystem", 00:17:13.059 "trtype": "$TEST_TRANSPORT", 00:17:13.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.059 "adrfam": "ipv4", 00:17:13.059 "trsvcid": "$NVMF_PORT", 00:17:13.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.059 "hdgst": ${hdgst:-false}, 00:17:13.059 "ddgst": ${ddgst:-false} 00:17:13.059 }, 00:17:13.059 "method": "bdev_nvme_attach_controller" 00:17:13.059 } 00:17:13.059 EOF 00:17:13.059 )") 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.059 { 00:17:13.059 "params": { 00:17:13.059 "name": "Nvme$subsystem", 00:17:13.059 "trtype": "$TEST_TRANSPORT", 00:17:13.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.059 "adrfam": "ipv4", 00:17:13.059 "trsvcid": "$NVMF_PORT", 00:17:13.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.059 "hdgst": ${hdgst:-false}, 00:17:13.059 "ddgst": ${ddgst:-false} 00:17:13.059 }, 00:17:13.059 "method": "bdev_nvme_attach_controller" 00:17:13.059 } 00:17:13.059 EOF 00:17:13.059 )") 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.059 { 00:17:13.059 "params": { 00:17:13.059 "name": "Nvme$subsystem", 00:17:13.059 "trtype": "$TEST_TRANSPORT", 00:17:13.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.059 "adrfam": "ipv4", 00:17:13.059 "trsvcid": "$NVMF_PORT", 00:17:13.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.059 "hdgst": ${hdgst:-false}, 00:17:13.059 "ddgst": ${ddgst:-false} 00:17:13.059 }, 00:17:13.059 "method": "bdev_nvme_attach_controller" 00:17:13.059 } 00:17:13.059 EOF 00:17:13.059 )") 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.059 { 00:17:13.059 "params": { 00:17:13.059 "name": "Nvme$subsystem", 00:17:13.059 "trtype": "$TEST_TRANSPORT", 00:17:13.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.059 "adrfam": "ipv4", 00:17:13.059 "trsvcid": "$NVMF_PORT", 00:17:13.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.059 "hdgst": ${hdgst:-false}, 00:17:13.059 "ddgst": ${ddgst:-false} 00:17:13.059 }, 00:17:13.059 "method": "bdev_nvme_attach_controller" 00:17:13.059 } 00:17:13.059 EOF 00:17:13.059 )") 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.059 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.059 { 00:17:13.059 "params": { 00:17:13.059 "name": "Nvme$subsystem", 00:17:13.059 "trtype": "$TEST_TRANSPORT", 00:17:13.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.059 "adrfam": "ipv4", 00:17:13.059 "trsvcid": "$NVMF_PORT", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.060 "hdgst": ${hdgst:-false}, 00:17:13.060 "ddgst": ${ddgst:-false} 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 } 00:17:13.060 EOF 00:17:13.060 )") 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.060 { 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme$subsystem", 00:17:13.060 "trtype": "$TEST_TRANSPORT", 00:17:13.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "$NVMF_PORT", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.060 "hdgst": ${hdgst:-false}, 00:17:13.060 "ddgst": ${ddgst:-false} 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 } 00:17:13.060 EOF 00:17:13.060 )") 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.060 [2024-12-10 04:05:07.373064] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:17:13.060 [2024-12-10 04:05:07.373109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786611 ] 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.060 { 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme$subsystem", 00:17:13.060 "trtype": "$TEST_TRANSPORT", 00:17:13.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "$NVMF_PORT", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.060 "hdgst": ${hdgst:-false}, 00:17:13.060 "ddgst": ${ddgst:-false} 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 } 00:17:13.060 EOF 00:17:13.060 )") 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.060 { 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme$subsystem", 00:17:13.060 "trtype": "$TEST_TRANSPORT", 00:17:13.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "$NVMF_PORT", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.060 "hdgst": ${hdgst:-false}, 00:17:13.060 "ddgst": ${ddgst:-false} 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 } 00:17:13.060 EOF 00:17:13.060 )") 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.060 { 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme$subsystem", 00:17:13.060 "trtype": "$TEST_TRANSPORT", 00:17:13.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "$NVMF_PORT", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.060 "hdgst": ${hdgst:-false}, 00:17:13.060 "ddgst": ${ddgst:-false} 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 } 00:17:13.060 EOF 00:17:13.060 )") 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:13.060 { 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme$subsystem", 00:17:13.060 "trtype": "$TEST_TRANSPORT", 00:17:13.060 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "$NVMF_PORT", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.060 "hdgst": ${hdgst:-false}, 00:17:13.060 "ddgst": ${ddgst:-false} 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 } 00:17:13.060 EOF 00:17:13.060 )") 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:17:13.060 04:05:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme1", 00:17:13.060 "trtype": "rdma", 00:17:13.060 "traddr": "192.168.100.8", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "4420", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.060 "hdgst": false, 00:17:13.060 "ddgst": false 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 },{ 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme2", 00:17:13.060 "trtype": "rdma", 00:17:13.060 "traddr": "192.168.100.8", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "4420", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:13.060 "hdgst": false, 00:17:13.060 "ddgst": false 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 },{ 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme3", 00:17:13.060 "trtype": "rdma", 00:17:13.060 "traddr": "192.168.100.8", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "4420", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:13.060 "hdgst": false, 00:17:13.060 "ddgst": false 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 },{ 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme4", 00:17:13.060 "trtype": "rdma", 00:17:13.060 "traddr": "192.168.100.8", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "4420", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:13.060 "hdgst": false, 00:17:13.060 "ddgst": false 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 },{ 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme5", 00:17:13.060 "trtype": "rdma", 00:17:13.060 "traddr": "192.168.100.8", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "4420", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:13.060 "hdgst": false, 00:17:13.060 "ddgst": false 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 },{ 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme6", 00:17:13.060 "trtype": "rdma", 00:17:13.060 "traddr": "192.168.100.8", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "4420", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:13.060 "hdgst": false, 00:17:13.060 "ddgst": false 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 },{ 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme7", 00:17:13.060 
"trtype": "rdma", 00:17:13.060 "traddr": "192.168.100.8", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "4420", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:13.060 "hdgst": false, 00:17:13.060 "ddgst": false 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 },{ 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme8", 00:17:13.060 "trtype": "rdma", 00:17:13.060 "traddr": "192.168.100.8", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "4420", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:13.060 "hdgst": false, 00:17:13.060 "ddgst": false 00:17:13.060 }, 00:17:13.060 "method": "bdev_nvme_attach_controller" 00:17:13.060 },{ 00:17:13.060 "params": { 00:17:13.060 "name": "Nvme9", 00:17:13.060 "trtype": "rdma", 00:17:13.060 "traddr": "192.168.100.8", 00:17:13.060 "adrfam": "ipv4", 00:17:13.060 "trsvcid": "4420", 00:17:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:13.060 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:13.060 "hdgst": false, 00:17:13.061 "ddgst": false 00:17:13.061 }, 00:17:13.061 "method": "bdev_nvme_attach_controller" 00:17:13.061 },{ 00:17:13.061 "params": { 00:17:13.061 "name": "Nvme10", 00:17:13.061 "trtype": "rdma", 00:17:13.061 "traddr": "192.168.100.8", 00:17:13.061 "adrfam": "ipv4", 00:17:13.061 "trsvcid": "4420", 00:17:13.061 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:13.061 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:13.061 "hdgst": false, 00:17:13.061 "ddgst": false 00:17:13.061 }, 00:17:13.061 "method": "bdev_nvme_attach_controller" 00:17:13.061 }' 00:17:13.061 [2024-12-10 04:05:07.432591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.318 [2024-12-10 04:05:07.470988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.249 Running I/O for 10 seconds... 
00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:17:14.250 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:17:14.507 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:17:14.507 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:14.507 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:14.507 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:14.507 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.507 
04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=155 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 155 -ge 100 ']' 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 786611 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 786611 ']' 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 786611 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786611 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786611' 00:17:14.765 killing process with pid 786611 00:17:14.765 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 786611 00:17:14.766 04:05:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 786611 00:17:14.766 Received shutdown signal, test time was about 0.700582 seconds 00:17:14.766 00:17:14.766 Latency(us) 00:17:14.766 [2024-12-10T03:05:09.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.766 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme1n1 : 0.69 405.36 25.33 0.00 0.00 154670.51 6310.87 215928.98 00:17:14.766 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme2n1 : 0.69 393.30 24.58 0.00 0.00 155296.94 8349.77 155344.59 00:17:14.766 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme3n1 : 0.69 415.78 25.99 0.00 0.00 144164.70 8495.41 148354.09 00:17:14.766 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme4n1 : 0.69 425.22 26.58 0.00 0.00 
137849.36 4441.88 138256.69 00:17:14.766 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme5n1 : 0.69 391.60 24.47 0.00 0.00 145833.31 8689.59 132042.90 00:17:14.766 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme6n1 : 0.70 412.57 25.79 0.00 0.00 135661.21 8786.68 125052.40 00:17:14.766 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme7n1 : 0.70 413.38 25.84 0.00 0.00 132221.75 8932.31 118061.89 00:17:14.766 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme8n1 : 0.70 402.78 25.17 0.00 0.00 132256.00 9029.40 110294.66 00:17:14.766 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme9n1 : 0.70 378.03 23.63 0.00 0.00 136944.13 8835.22 100973.99 00:17:14.766 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:14.766 Verification LBA range: start 0x0 length 0x400 00:17:14.766 Nvme10n1 : 0.70 365.81 22.86 0.00 0.00 139696.45 8883.77 166995.44 00:17:14.766 [2024-12-10T03:05:09.155Z] =================================================================================================================== 00:17:14.766 [2024-12-10T03:05:09.155Z] Total : 4003.83 250.24 0.00 0.00 141392.12 4441.88 215928.98 00:17:15.023 04:05:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 786305 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
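The num_read_ops polling traced above (shutdown.sh@59-70) is what gated the shutdown: the first bdev_get_iostat sample returned 3 reads, one 0.25 s sleep later 155 >= 100 held, so ret=0 and bdevperf (pid 786611) was killed mid-run, yielding the per-controller latency table with ~4000 aggregate IOPS. The loop reconstructed in outline from the trace; the RPC call, jq filter, loop bounds, threshold, and sleep interval are all shown in the xtrace, the function wrapper is a sketch rather than the verbatim source:

    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            # shutdown.sh@61: one iostat sample, reduced to the read counter
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break    # enough verified reads observed; safe to shut down
            fi
            sleep 0.25   # shutdown.sh@68
        done
        return $ret
    }
    waitforio /var/tmp/bdevperf.sock Nvme1n1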
00:17:15.955 rmmod nvme_rdma 00:17:15.955 rmmod nvme_fabrics 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 786305 ']' 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 786305 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 786305 ']' 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 786305 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.955 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786305 00:17:16.213 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:16.213 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:16.213 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786305' 00:17:16.213 killing process with pid 786305 00:17:16.213 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 786305 00:17:16.213 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 786305 00:17:16.470 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:16.470 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:16.470 00:17:16.470 real 0m4.667s 00:17:16.470 user 0m18.669s 00:17:16.470 sys 0m0.991s 00:17:16.470 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.470 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:16.470 ************************************ 00:17:16.470 END TEST nvmf_shutdown_tc2 00:17:16.471 ************************************ 00:17:16.471 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:16.471 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:16.471 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.471 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:16.730 ************************************ 00:17:16.730 START TEST nvmf_shutdown_tc3 00:17:16.730 ************************************ 00:17:16.730 04:05:10 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:17:16.730 04:05:10 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:16.730 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:16.730 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:16.730 Found net devices under 0000:18:00.0: mlx_0_0 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # 
[[ rdma == tcp ]] 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:16.730 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:16.731 Found net devices under 0000:18:00.1: mlx_0_1 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:16.731 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:16.731 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:16.731 altname enp24s0f0np0 00:17:16.731 altname ens785f0np0 00:17:16.731 inet 192.168.100.8/24 scope global mlx_0_0 00:17:16.731 valid_lft forever preferred_lft forever 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:16.731 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:16.731 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:16.731 altname enp24s0f1np1 00:17:16.731 altname ens785f1np1 00:17:16.731 inet 192.168.100.9/24 scope global mlx_0_1 00:17:16.731 valid_lft forever preferred_lft forever 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:16.731 04:05:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:16.731 192.168.100.9' 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:16.731 192.168.100.9' 00:17:16.731 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:16.732 192.168.100.9' 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:16.732 
04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=787267 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 787267 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 787267 ']' 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.732 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:16.989 [2024-12-10 04:05:11.140044] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:16.989 [2024-12-10 04:05:11.140087] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.989 [2024-12-10 04:05:11.198086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.989 [2024-12-10 04:05:11.237654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.989 [2024-12-10 04:05:11.237687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:16.989 [2024-12-10 04:05:11.237694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.989 [2024-12-10 04:05:11.237700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.989 [2024-12-10 04:05:11.237704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.989 [2024-12-10 04:05:11.239015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.989 [2024-12-10 04:05:11.239089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.989 [2024-12-10 04:05:11.239198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.989 [2024-12-10 04:05:11.239200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.989 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.989 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:17:16.989 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:16.989 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:16.989 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:16.989 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.989 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:16.989 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.989 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.246 [2024-12-10 04:05:11.394614] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x153a3c0/0x153e8b0) succeed. 00:17:17.246 [2024-12-10 04:05:11.402803] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x153ba50/0x157ff50) succeed. 
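
At this point the harness has loaded nvme-rdma, started nvmf_tgt, and created the RDMA transport. Stripped of the rpc_cmd wrapper, the equivalent manual sequence would be roughly the following (paths relative to an SPDK checkout; the flags are the ones visible in the trace):

    sudo modprobe nvme-rdma                        # host-side RDMA initiator module, as in the trace
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &  # trace mask 0xFFFF, core mask 0x1E = cores 1-4
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
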
00:17:17.246 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.246 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:17.246 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.247 04:05:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.247 Malloc1 00:17:17.247 [2024-12-10 04:05:11.614450] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:17.504 Malloc2 00:17:17.504 Malloc3 00:17:17.504 Malloc4 00:17:17.504 Malloc5 00:17:17.504 Malloc6 00:17:17.504 Malloc7 00:17:17.762 Malloc8 00:17:17.762 Malloc9 00:17:17.762 Malloc10 00:17:17.762 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.762 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=787571 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 787571 /var/tmp/bdevperf.sock 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 787571 ']' 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
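
The shutdown.sh loop above appends one heredoc of RPCs per subsystem (1..10) into rpcs.txt; xtrace shows the cat calls but not the heredoc bodies. Judging by the Malloc1..Malloc10 bdevs and the 192.168.100.8:4420 listener that appear in the log, each batch plausibly looks like the sketch below (the malloc size/block-size arguments and the -a/-s subsystem flags are assumptions, not taken from the log):

    for i in {1..10}; do
        cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
    done
    # The accumulated batch is later replayed against the target in one rpc_cmd invocation.
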
00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.763 { 00:17:17.763 "params": { 00:17:17.763 "name": "Nvme$subsystem", 00:17:17.763 "trtype": "$TEST_TRANSPORT", 00:17:17.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.763 "adrfam": "ipv4", 00:17:17.763 "trsvcid": "$NVMF_PORT", 00:17:17.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.763 "hdgst": ${hdgst:-false}, 00:17:17.763 "ddgst": ${ddgst:-false} 00:17:17.763 }, 00:17:17.763 "method": "bdev_nvme_attach_controller" 00:17:17.763 } 00:17:17.763 EOF 00:17:17.763 )") 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.763 { 00:17:17.763 "params": { 00:17:17.763 "name": "Nvme$subsystem", 00:17:17.763 "trtype": "$TEST_TRANSPORT", 00:17:17.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.763 "adrfam": "ipv4", 00:17:17.763 "trsvcid": "$NVMF_PORT", 00:17:17.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.763 "hdgst": ${hdgst:-false}, 00:17:17.763 "ddgst": ${ddgst:-false} 00:17:17.763 }, 00:17:17.763 "method": "bdev_nvme_attach_controller" 00:17:17.763 } 00:17:17.763 EOF 00:17:17.763 )") 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.763 { 00:17:17.763 "params": { 00:17:17.763 "name": "Nvme$subsystem", 00:17:17.763 "trtype": "$TEST_TRANSPORT", 00:17:17.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.763 "adrfam": "ipv4", 00:17:17.763 "trsvcid": "$NVMF_PORT", 00:17:17.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.763 "hdgst": ${hdgst:-false}, 00:17:17.763 "ddgst": ${ddgst:-false} 00:17:17.763 }, 00:17:17.763 "method": "bdev_nvme_attach_controller" 00:17:17.763 } 00:17:17.763 EOF 00:17:17.763 )") 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.763 { 00:17:17.763 "params": { 00:17:17.763 "name": "Nvme$subsystem", 00:17:17.763 "trtype": "$TEST_TRANSPORT", 00:17:17.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.763 "adrfam": "ipv4", 00:17:17.763 "trsvcid": "$NVMF_PORT", 00:17:17.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.763 "hdgst": ${hdgst:-false}, 00:17:17.763 "ddgst": ${ddgst:-false} 00:17:17.763 }, 00:17:17.763 "method": "bdev_nvme_attach_controller" 00:17:17.763 } 00:17:17.763 EOF 00:17:17.763 )") 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.763 { 00:17:17.763 "params": { 00:17:17.763 "name": "Nvme$subsystem", 00:17:17.763 "trtype": "$TEST_TRANSPORT", 00:17:17.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.763 "adrfam": "ipv4", 00:17:17.763 "trsvcid": "$NVMF_PORT", 00:17:17.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.763 "hdgst": ${hdgst:-false}, 00:17:17.763 "ddgst": ${ddgst:-false} 00:17:17.763 }, 00:17:17.763 "method": "bdev_nvme_attach_controller" 00:17:17.763 } 00:17:17.763 EOF 00:17:17.763 )") 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.763 { 00:17:17.763 "params": { 00:17:17.763 "name": "Nvme$subsystem", 00:17:17.763 "trtype": "$TEST_TRANSPORT", 00:17:17.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.763 "adrfam": "ipv4", 00:17:17.763 "trsvcid": "$NVMF_PORT", 00:17:17.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.763 "hdgst": ${hdgst:-false}, 00:17:17.763 "ddgst": ${ddgst:-false} 00:17:17.763 }, 00:17:17.763 "method": "bdev_nvme_attach_controller" 00:17:17.763 } 00:17:17.763 EOF 00:17:17.763 )") 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.763 { 00:17:17.763 "params": { 00:17:17.763 "name": "Nvme$subsystem", 00:17:17.763 "trtype": "$TEST_TRANSPORT", 00:17:17.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.763 "adrfam": "ipv4", 00:17:17.763 "trsvcid": "$NVMF_PORT", 00:17:17.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.763 "hdgst": ${hdgst:-false}, 00:17:17.763 "ddgst": ${ddgst:-false} 00:17:17.763 }, 00:17:17.763 "method": "bdev_nvme_attach_controller" 00:17:17.763 } 00:17:17.763 EOF 00:17:17.763 )") 00:17:17.763 [2024-12-10 04:05:12.089619] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:17:17.763 [2024-12-10 04:05:12.089665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787571 ] 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.763 { 00:17:17.763 "params": { 00:17:17.763 "name": "Nvme$subsystem", 00:17:17.763 "trtype": "$TEST_TRANSPORT", 00:17:17.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.763 "adrfam": "ipv4", 00:17:17.763 "trsvcid": "$NVMF_PORT", 00:17:17.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.763 "hdgst": ${hdgst:-false}, 00:17:17.763 "ddgst": ${ddgst:-false} 00:17:17.763 }, 00:17:17.763 "method": "bdev_nvme_attach_controller" 00:17:17.763 } 00:17:17.763 EOF 00:17:17.763 )") 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.763 { 00:17:17.763 "params": { 00:17:17.763 "name": "Nvme$subsystem", 00:17:17.763 "trtype": "$TEST_TRANSPORT", 00:17:17.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.763 "adrfam": "ipv4", 00:17:17.763 "trsvcid": "$NVMF_PORT", 00:17:17.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.763 "hdgst": ${hdgst:-false}, 00:17:17.763 "ddgst": ${ddgst:-false} 00:17:17.763 }, 00:17:17.763 "method": "bdev_nvme_attach_controller" 00:17:17.763 } 00:17:17.763 EOF 00:17:17.763 )") 00:17:17.763 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.764 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:17.764 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:17.764 { 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme$subsystem", 00:17:17.764 "trtype": "$TEST_TRANSPORT", 00:17:17.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "$NVMF_PORT", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.764 "hdgst": ${hdgst:-false}, 00:17:17.764 "ddgst": ${ddgst:-false} 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 } 00:17:17.764 EOF 00:17:17.764 )") 00:17:17.764 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:17.764 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
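
gen_nvmf_target_json, traced above, builds the --json config handed to bdevperf on fd 63: one bdev_nvme_attach_controller fragment per subsystem, comma-joined via IFS and run through jq. A sketch reconstructed from the trace (the real function embeds the fragments in a fuller wrapper that xtrace does not echo; a bare JSON array is assumed here so the sketch stays self-contained):

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
            )")
        done
        local IFS=,
        # Comma-join the per-subsystem fragments, then validate/pretty-print with jq.
        jq . <<< "[${config[*]}]"
    }

    # Invoked in the trace as: bdevperf ... --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)
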
00:17:17.764 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:17:17.764 04:05:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme1", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 },{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme2", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 },{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme3", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 },{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme4", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 },{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme5", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 },{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme6", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 },{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme7", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 },{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme8", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 },{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme9", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 },{ 00:17:17.764 "params": { 00:17:17.764 "name": "Nvme10", 00:17:17.764 "trtype": "rdma", 00:17:17.764 "traddr": "192.168.100.8", 00:17:17.764 "adrfam": "ipv4", 00:17:17.764 "trsvcid": "4420", 00:17:17.764 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:17.764 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:17.764 "hdgst": false, 00:17:17.764 "ddgst": false 00:17:17.764 }, 00:17:17.764 "method": "bdev_nvme_attach_controller" 00:17:17.764 }' 00:17:18.025 [2024-12-10 04:05:12.150297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.025 [2024-12-10 04:05:12.188352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.958 Running I/O for 10 seconds... 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=27 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 27 -ge 100 ']' 00:17:18.958 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:17:19.216 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:17:19.216 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:19.216 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:19.216 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:19.216 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.216 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=179 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 179 -ge 100 ']' 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 787267 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 787267 ']' 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 787267 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787267 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:19.474 04:05:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787267' 00:17:19.474 killing process with pid 787267 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 787267 00:17:19.474 04:05:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 787267 00:17:20.040 2723.00 IOPS, 170.19 MiB/s [2024-12-10T03:05:14.429Z] 04:05:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:17:20.650 [2024-12-10 04:05:14.803419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.803454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.803464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.803471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.803494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.803499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.803506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.803511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.805727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.651 [2024-12-10 04:05:14.805767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
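
Backing up a step: the waitforio polling that gated this shutdown (read counts of 27, then 179, crossing the 100-op threshold before killprocess fired) reduces to the loop below, reconstructed from the traced shutdown.sh lines; rpc.py stands in for the harness's rpc_cmd wrapper:

    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        [[ -z $rpc_sock || -z $bdev ]] && return 1
        # Up to 10 attempts, 0.25s apart, until the bdev shows >= 100 completed reads.
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

    # As exercised above: waitforio /var/tmp/bdevperf.sock Nvme1n1
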
00:17:20.651 [2024-12-10 04:05:14.805814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.805839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.805863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.805893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.805917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.805938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.805961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.805982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.808011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.651 [2024-12-10 04:05:14.808044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:17:20.651 [2024-12-10 04:05:14.808088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.808112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.808136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.808168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.808177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.808185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.808194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.808201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.810462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.651 [2024-12-10 04:05:14.810495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:17:20.651 [2024-12-10 04:05:14.810535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.810559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.810582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.810603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.810626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.810647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.810670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.810691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.812995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.651 [2024-12-10 04:05:14.813027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:17:20.651 [2024-12-10 04:05:14.813071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.813094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.813119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.813139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.813162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.813184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.813207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.813228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.815114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.651 [2024-12-10 04:05:14.815146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:17:20.651 [2024-12-10 04:05:14.815183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.815206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.815230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.815250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.815282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.815304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.815326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.815347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.817674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.651 [2024-12-10 04:05:14.817706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:17:20.651 [2024-12-10 04:05:14.817747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.817770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.817793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.817815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.817845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.817867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.817890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.817910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.820229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.651 [2024-12-10 04:05:14.820261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:17:20.651 [2024-12-10 04:05:14.820341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.820366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.820389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.820409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.820432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.820454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.820477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.651 [2024-12-10 04:05:14.820498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.651 [2024-12-10 04:05:14.822778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.651 [2024-12-10 04:05:14.822810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:17:20.652 [2024-12-10 04:05:14.822846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.652 [2024-12-10 04:05:14.822869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.822892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.652 [2024-12-10 04:05:14.822913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.822936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.652 [2024-12-10 04:05:14.822957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.822989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.652 [2024-12-10 04:05:14.822997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.824926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.652 [2024-12-10 04:05:14.824966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:17:20.652 [2024-12-10 04:05:14.825007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.652 [2024-12-10 04:05:14.825030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.825054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.652 [2024-12-10 04:05:14.825075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.825098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.652 [2024-12-10 04:05:14.825119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.825142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.652 [2024-12-10 04:05:14.825163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:1 sqhd:5990 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.827383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:20.652 [2024-12-10 04:05:14.827414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:17:20.652 [2024-12-10 04:05:14.829819] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:17:20.652 [2024-12-10 04:05:14.832373] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:17:20.652 [2024-12-10 04:05:14.834738] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:17:20.652 [2024-12-10 04:05:14.837154] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:17:20.652 [2024-12-10 04:05:14.839377] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:17:20.652 [2024-12-10 04:05:14.841896] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:17:20.652 [2024-12-10 04:05:14.843937] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:17:20.652 [2024-12-10 04:05:14.846039] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:17:20.652 [2024-12-10 04:05:14.848141] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:17:20.652 [2024-12-10 04:05:14.848281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f980 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 
04:05:14.848496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f780 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f700 len:0x10000 key:0x183c00 00:17:20.652 [2024-12-10 04:05:14.848591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194af600 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001945f380 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001942f200 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.652 [2024-12-10 04:05:14.848796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001941f180 len:0x10000 key:0x183700 00:17:20.652 [2024-12-10 04:05:14.848804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.848817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x183700 00:17:20.653 [2024-12-10 04:05:14.848843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.848857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.848865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.848878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.848887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.848899] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.848908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.848920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.848929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.848942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afe00 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.848950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.848963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.848971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.848984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001998fd00 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.848993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001997fc80 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001996fc00 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 
nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f900 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198bf680 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986f400 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001984f300 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983f280 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001981f180 
len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001980f100 len:0x10000 key:0x182100 00:17:20.653 [2024-12-10 04:05:14.849519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x184000 00:17:20.653 [2024-12-10 04:05:14.849540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x184000 00:17:20.653 [2024-12-10 04:05:14.849561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x184000 00:17:20.653 [2024-12-10 04:05:14.849585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.653 [2024-12-10 04:05:14.849597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x184000 00:17:20.654 [2024-12-10 04:05:14.849606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.654 [2024-12-10 04:05:14.849619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bafe00 len:0x10000 key:0x184000 00:17:20.654 [2024-12-10 04:05:14.849628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.654 [2024-12-10 04:05:14.849641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b9fd80 len:0x10000 key:0x184000 00:17:20.654 [2024-12-10 04:05:14.849649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.654 [2024-12-10 04:05:14.849662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b8fd00 len:0x10000 key:0x184000 00:17:20.654 [2024-12-10 04:05:14.849671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0 00:17:20.654 [2024-12-10 04:05:14.849684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x183c00 
00:17:20.654 [2024-12-10 04:05:14.849692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:424c2000 sqhd:7210 p:0 m:0 dnr:0
00:17:20.654 [2024-12-10 04:05:14.867932] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868115] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868157] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868193] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868223] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868255] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868303] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868333] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868364] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868398] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.868427] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:17:20.654 [2024-12-10 04:05:14.869109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.869131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.869142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.869152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.869162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.869524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.869539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.869549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.869560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.869569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:17:20.654 [2024-12-10 04:05:14.894816] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.894873] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.894893] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:17:20.654 [2024-12-10 04:05:14.894998] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.895024] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.895051] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e3200
00:17:20.654 [2024-12-10 04:05:14.895114] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.895123] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.895130] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d8d40
00:17:20.654 [2024-12-10 04:05:14.895206] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.895216] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.895222] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cc0c0
00:17:20.654 [2024-12-10 04:05:14.895296] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.895306] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.895312] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bf380
00:17:20.654 [2024-12-10 04:05:14.895423] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.895433] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.895440] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017052c40
00:17:20.654 [2024-12-10 04:05:14.895527] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.895537] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.895548] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001707e000
00:17:20.654 [2024-12-10 04:05:14.895629] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.895638] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.895645] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001708d2c0
00:17:20.654 [2024-12-10 04:05:14.895736] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.895745] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.895752] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017089e00
00:17:20.654 [2024-12-10 04:05:14.895818] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:20.654 [2024-12-10 04:05:14.895828] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:20.654 [2024-12-10 04:05:14.895834] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cb580
00:17:20.654 task offset: 38912 on job bdev=Nvme1n1 fails
00:17:20.654
00:17:20.654 Latency(us)
00:17:20.654 [2024-12-10T03:05:15.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:20.654 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.654 Job: Nvme1n1 ended in about 1.84 seconds with error
00:17:20.654 Verification LBA range: start 0x0 length 0x400
00:17:20.654 Nvme1n1 : 1.84 151.83 9.49 34.70 0.00 339968.42 7767.23 1043915.66
00:17:20.654 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.654 Job: Nvme2n1 ended in about 1.85 seconds with error
00:17:20.654 Verification LBA range: start 0x0 length 0x400
00:17:20.654 Nvme2n1 : 1.85 147.41 9.21 34.69 0.00 345326.61 8980.86 1037701.88
00:17:20.654 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.654 Job: Nvme3n1 ended in about 1.85 seconds with error
00:17:20.654 Verification LBA range: start 0x0 length 0x400
00:17:20.654 Nvme3n1 : 1.85 156.00 9.75 34.67 0.00 327000.92 14272.28 1037701.88
00:17:20.654 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.654 Job: Nvme4n1 ended in about 1.85 seconds with error
00:17:20.654 Verification LBA range: start 0x0 length 0x400
00:17:20.654 Nvme4n1 : 1.85 159.71 9.98 34.65 0.00 318026.84 5291.43 1031488.09
00:17:20.654 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.654 Job: Nvme5n1 ended in about 1.85 seconds with error
00:17:20.654 Verification LBA range: start 0x0 length 0x400
00:17:20.654 Nvme5n1 : 1.85 147.19 9.20 34.63 0.00 337067.71 28156.21 1031488.09
00:17:20.654 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.654 Job: Nvme6n1 ended in about 1.85 seconds with error
00:17:20.654 Verification LBA range: start 0x0 length 0x400
00:17:20.654 Nvme6n1 : 1.85 154.68 9.67 34.61 0.00 321088.95 32622.36 1025274.31
00:17:20.654 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.654 Job: Nvme7n1 ended in about 1.85 seconds with error
00:17:20.655 Verification LBA range: start 0x0 length 0x400
00:17:20.655 Nvme7n1 : 1.85 154.06 9.63 34.60 0.00 319584.50 40972.14 1025274.31
00:17:20.655 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.655 Job: Nvme8n1 ended in about 1.85 seconds with error
00:17:20.655 Verification LBA range: start 0x0 length 0x400
00:17:20.655 Nvme8n1 : 1.85 151.82 9.49 34.58 0.00 320783.21 47768.46 1025274.31
00:17:20.655 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.655 Job: Nvme9n1 ended in about 1.85 seconds with error
00:17:20.655 Verification LBA range: start 0x0 length 0x400
00:17:20.655 Nvme9n1 : 1.85 146.88 9.18 34.56 0.00 326655.64 36894.34 1025274.31
00:17:20.655 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.655 Job: Nvme10n1 ended in about 1.81 seconds with error
00:17:20.655 Verification LBA range: start 0x0 length 0x400
00:17:20.655 Nvme10n1 : 1.81 106.16 6.63 35.39 0.00 417297.45 58642.58 1062557.01
00:17:20.655 [2024-12-10T03:05:15.044Z] ===================================================================================================================
00:17:20.655 [2024-12-10T03:05:15.044Z] Total : 1475.75 92.23 347.07 0.00 335022.41 5291.43 1062557.01
00:17:20.655 [2024-12-10 04:05:14.918131] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:20.911 04:05:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 787571
00:17:20.911 04:05:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:17:20.911 04:05:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 787571
00:17:20.911 04:05:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:17:20.911 04:05:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:20.911 04:05:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:17:20.911 04:05:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type
-t "$arg")" in 00:17:20.911 04:05:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 787571 00:17:21.843 [2024-12-10 04:05:15.898943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.899000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:21.843 [2024-12-10 04:05:15.900746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.900781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:17:21.843 [2024-12-10 04:05:15.902710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.902743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:17:21.843 [2024-12-10 04:05:15.904600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.904635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:17:21.843 [2024-12-10 04:05:15.906252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.906296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:17:21.843 [2024-12-10 04:05:15.907789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.907832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:17:21.843 [2024-12-10 04:05:15.909541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.909553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:17:21.843 [2024-12-10 04:05:15.910832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.910873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:17:21.843 [2024-12-10 04:05:15.912357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.912388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:17:21.843 [2024-12-10 04:05:15.913594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:21.843 [2024-12-10 04:05:15.913611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:17:21.843 [2024-12-10 04:05:15.913619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:21.843 [2024-12-10 04:05:15.913628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:21.843 [2024-12-10 04:05:15.913637] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:17:21.843 [2024-12-10 04:05:15.913649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:21.843 [2024-12-10 04:05:15.913663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:17:21.843 [2024-12-10 04:05:15.913670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:17:21.843 [2024-12-10 04:05:15.913678] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:17:21.843 [2024-12-10 04:05:15.913686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:17:21.843 [2024-12-10 04:05:15.913696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:17:21.843 [2024-12-10 04:05:15.913704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:17:21.843 [2024-12-10 04:05:15.913711] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:17:21.843 [2024-12-10 04:05:15.913719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:17:21.843 [2024-12-10 04:05:15.913731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:17:21.843 [2024-12-10 04:05:15.913738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:17:21.843 [2024-12-10 04:05:15.913746] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:17:21.843 [2024-12-10 04:05:15.913754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:17:21.843 [2024-12-10 04:05:15.913764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:17:21.843 [2024-12-10 04:05:15.913771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:17:21.843 [2024-12-10 04:05:15.913779] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:17:21.843 [2024-12-10 04:05:15.913786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:17:21.843 [2024-12-10 04:05:15.913899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:17:21.843 [2024-12-10 04:05:15.913910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:17:21.843 [2024-12-10 04:05:15.913918] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:17:21.843 [2024-12-10 04:05:15.913930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:17:21.843 [2024-12-10 04:05:15.913940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:17:21.843 [2024-12-10 04:05:15.913948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:17:21.843 [2024-12-10 04:05:15.913956] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:17:21.843 [2024-12-10 04:05:15.913964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:17:21.843 [2024-12-10 04:05:15.913973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:17:21.843 [2024-12-10 04:05:15.913982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:17:21.843 [2024-12-10 04:05:15.913990] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:17:21.844 [2024-12-10 04:05:15.913997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:17:21.844 [2024-12-10 04:05:15.914008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:17:21.844 [2024-12-10 04:05:15.914015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:17:21.844 [2024-12-10 04:05:15.914022] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:17:21.844 [2024-12-10 04:05:15.914031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:17:21.844 [2024-12-10 04:05:15.914040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:17:21.844 [2024-12-10 04:05:15.914047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:17:21.844 [2024-12-10 04:05:15.914055] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:17:21.844 [2024-12-10 04:05:15.914064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:21.844 rmmod nvme_rdma 00:17:21.844 rmmod nvme_fabrics 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 787267 ']' 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 787267 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 787267 ']' 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 787267 00:17:21.844 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (787267) - No such process 00:17:21.844 04:05:16 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 787267 is not found' 00:17:21.844 Process with pid 787267 is not found 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:21.844 00:17:21.844 real 0m5.247s 00:17:21.844 user 0m15.343s 00:17:21.844 sys 0m1.102s 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:21.844 ************************************ 00:17:21.844 END TEST nvmf_shutdown_tc3 00:17:21.844 ************************************ 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:21.844 ************************************ 00:17:21.844 START TEST nvmf_shutdown_tc4 00:17:21.844 ************************************ 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:21.844 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:21.845 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:21.845 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:21.845 
04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:21.845 Found net devices under 0000:18:00.0: mlx_0_0 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:21.845 Found net devices under 0000:18:00.1: mlx_0_1 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:21.845 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:22.104 04:05:16 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:22.104 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.104 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:22.104 altname enp24s0f0np0 00:17:22.104 altname ens785f0np0 00:17:22.104 inet 192.168.100.8/24 scope global mlx_0_0 00:17:22.104 valid_lft forever preferred_lft forever 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:22.104 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:22.105 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.105 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:22.105 altname enp24s0f1np1 00:17:22.105 altname ens785f1np1 00:17:22.105 inet 192.168.100.9/24 scope global mlx_0_1 00:17:22.105 valid_lft forever preferred_lft forever 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- 
# get_ip_address mlx_0_1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:22.105 192.168.100.9' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:22.105 192.168.100.9' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:22.105 192.168.100.9' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=788479 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 788479 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 788479 ']' 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.105 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:22.105 [2024-12-10 04:05:16.466900] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:22.105 [2024-12-10 04:05:16.466950] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.440 [2024-12-10 04:05:16.527601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.440 [2024-12-10 04:05:16.567441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.440 [2024-12-10 04:05:16.567476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.440 [2024-12-10 04:05:16.567482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.440 [2024-12-10 04:05:16.567487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.440 [2024-12-10 04:05:16.567491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
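For reference: nvmf_tgt is launched above with core mask -m 0x1E. A minimal sketch of how such a mask decodes to reactor cores (plain bash, nothing here is taken from SPDK itself; 0x1E is the mask used in this run):

    mask=$((0x1E))                        # core mask passed via -m
    for core in $(seq 0 31); do
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"
        fi
    done
    # 0x1E = 0b11110, i.e. cores 1, 2, 3 and 4 -- consistent with the four
    # 'Reactor started on core N' notices that follow.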
00:17:22.440 [2024-12-10 04:05:16.568746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.440 [2024-12-10 04:05:16.568831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.440 [2024-12-10 04:05:16.568938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.440 [2024-12-10 04:05:16.568939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:22.440 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.440 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:17:22.440 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:22.440 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:22.440 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:22.440 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.440 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:22.440 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.440 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:22.440 [2024-12-10 04:05:16.731526] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d213c0/0x1d258b0) succeed. 00:17:22.440 [2024-12-10 04:05:16.739770] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d22a50/0x1d66f50) succeed. 
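The nvmf_create_transport call above goes through the test harness's rpc_cmd wrapper around scripts/rpc.py; run stand-alone it corresponds to roughly the following (socket path and script location assumed to match the defaults seen elsewhere in this run):

    # Create the RDMA transport on the running nvmf_tgt (sketch).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192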
00:17:22.697 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.697 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:22.697 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.698 04:05:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:22.698 Malloc1 00:17:22.698 [2024-12-10 04:05:16.956555] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:22.698 Malloc2 00:17:22.698 Malloc3 00:17:22.698 Malloc4 00:17:22.955 Malloc5 00:17:22.955 Malloc6 00:17:22.955 Malloc7 00:17:22.955 Malloc8 00:17:22.955 Malloc9 00:17:22.955 Malloc10 00:17:23.212 04:05:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.212 04:05:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:23.212 04:05:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:23.212 04:05:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:23.212 04:05:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=788743 00:17:23.212 04:05:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:17:23.212 04:05:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:17:23.212 [2024-12-10 04:05:17.468876] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
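Each pass of the for i in "${num_subsystems[@]}" loop above cat's one RPC fragment into rpcs.txt, and the rpc_cmd at target/shutdown.sh@36 replays the whole file; the Malloc1..Malloc10 bdevs and the 'Target Listening' notice are its visible output. A sketch of what the fragment for i=1 typically expands to (malloc size and serial number here are illustrative, not read from this log; the cnodeN NQNs match the controllers reported later in the run):

    # One subsystem's worth of RPCs, as fed to rpc.py (sketch).
    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The spdk_nvme_perf initiator started right after (-q 128, 45056-byte random writes for 20 seconds over trtype:rdma to 192.168.100.8:4420) is what generates the in-flight I/O that the shutdown below aborts.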
00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 788479 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 788479 ']' 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 788479 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 788479 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 788479' 00:17:28.472 killing process with pid 788479 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 788479 00:17:28.472 04:05:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 788479 00:17:28.472 NVMe io qpair process completion error 00:17:28.472 NVMe io qpair process completion error 00:17:28.472 NVMe io qpair process completion error 00:17:28.472 NVMe io qpair process completion error 00:17:28.472 NVMe io qpair process completion error 00:17:28.472 NVMe io qpair process completion error 00:17:28.729 04:05:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error (sct=0, sc=8) 00:17:29.293 starting I/O failed: -6 00:17:29.293 Write completed with error 
(sct=0, sc=8) 00:17:29.293 starting I/O failed: -6
00:17:29.293 [... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pairs omitted ...]
00:17:29.293 [2024-12-10 04:05:23.537264] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:17:29.293 Write completed with error (sct=0, sc=8)
00:17:29.294 [... identical 'Write completed with error (sct=0, sc=8)' lines omitted ...]
00:17:29.294 [2024-12-10 04:05:23.548220] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:17:29.294 Write completed with error (sct=0, sc=8)
00:17:29.294 [... identical 'Write completed with error (sct=0, sc=8)' lines omitted ...]
00:17:29.294 [2024-12-10 04:05:23.558833] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:17:29.294 Write completed with error (sct=0, sc=8)
00:17:29.295 [... identical 'Write completed with error (sct=0, sc=8)' lines omitted; 'starting I/O failed: -6' pairs resume ...]
00:17:29.295 [2024-12-10 04:05:23.568938] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:17:29.295 [... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pairs omitted ...]
00:17:29.295 [2024-12-10 04:05:23.579655] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:17:29.296 [... identical 'Write completed with error (sct=0, sc=8)' lines omitted ...]
00:17:29.296 Write completed with error
00:17:29.296 NVMe io qpair process completion error
00:17:29.860 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 788743
00:17:29.860 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:17:29.860 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 788743
00:17:29.860 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:17:29.860 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:29.860 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:17:29.860 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:29.860 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 788743
00:17:30.427 [2024-12-10 04:05:24.582604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.427 [2024-12-10 04:05:24.582662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:17:30.427 [2024-12-10 04:05:24.584316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.427 [2024-12-10 04:05:24.584352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:17:30.427 [2024-12-10 04:05:24.586697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.427 [2024-12-10 04:05:24.586738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:17:30.427 [2024-12-10 04:05:24.588978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.427 [2024-12-10 04:05:24.589011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:17:30.428 [2024-12-10 04:05:24.591178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.428 [2024-12-10 04:05:24.591210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
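The NOT wait 788743 trace above is the negative-test helper from test/common/autotest_common.sh: the harness expects wait on the killed spdk_nvme_perf process to report failure, and the test passes only because the command fails. A minimal bash sketch of that pattern, reconstructed from the xtrace lines here rather than copied from the real helper (which also special-cases exit codes above 128 and expected-output matching; the case branches below are an assumption):

    valid_exec_arg() {
        # Mirrors the traced check: reject anything bash cannot classify
        # (the accepted branch list here is assumed, not shown in the trace)
        local arg=$1
        case "$(type -t "$arg")" in
            function | builtin | file) return 0 ;;
            *) return 1 ;;
        esac
    }

    NOT() {
        # Run the wrapped command and succeed only if it failed
        local es=0
        valid_exec_arg "$@" || return 1
        "$@" || es=$?
        (( !es == 0 ))   # exactly the traced final check
    }

    # e.g. NOT wait 788743   -> returns 0, since wait reports nonzero status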
00:17:30.428 Write completed with error (sct=0, sc=8)
00:17:30.428 [2024-12-10 04:05:24.593650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.428 [2024-12-10 04:05:24.593691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:17:30.428 [2024-12-10 04:05:24.593718] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:17:30.428 Write completed with error (sct=0, sc=8)
00:17:30.428 [2024-12-10 04:05:24.596000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.428 [2024-12-10 04:05:24.596031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:17:30.428 Write completed with error (sct=0, sc=8)
00:17:30.428 [2024-12-10 04:05:24.598586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.428 [2024-12-10 04:05:24.598619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:17:30.428 Write completed with error (sct=0, sc=8)
00:17:30.428 [2024-12-10 04:05:24.600975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.428 [2024-12-10 04:05:24.601008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:17:30.428 Write completed with error (sct=0, sc=8)
00:17:30.428 [2024-12-10 04:05:24.603839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:30.428 [2024-12-10 04:05:24.603871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:17:30.428 Write completed with error (sct=0, sc=8)
00:17:30.431 Initializing NVMe Controllers
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:17:30.431 Controller IO queue size 128, less than required.
00:17:30.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:17:30.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:17:30.431 Initialization complete. Launching workers.
00:17:30.431 ========================================================
00:17:30.431 Latency(us)
00:17:30.431 Device Information : IOPS MiB/s Average min max
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1603.03 68.88 79555.60 103.54 1226971.63
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1603.03 68.88 79273.23 106.07 1189661.53
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1619.18 69.57 92409.44 107.67 2216518.44
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1612.62 69.29 92918.99 108.19 2239291.37
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1625.24 69.83 92321.74 111.45 2239848.20
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1592.26 68.42 79801.27 108.10 1202973.61
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1657.89 71.24 90547.31 102.66 2073930.84
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1584.52 68.08 80246.69 108.04 1212862.22
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1590.75 68.35 80023.73 107.01 1221496.90
00:17:30.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1628.94 69.99 92108.77 107.99 2231493.85
00:17:30.431 ========================================================
00:17:30.431 Total : 16117.46 692.55 85980.97 102.66 2239848.20
00:17:30.431 
00:17:30.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
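The "Controller IO queue size 128, less than required" advisories above mean the perf run requested a deeper queue than the target's I/O queues provide, so excess requests sit queued in the host driver; the write failures and keep-alive errors then follow from the controllers being torn down mid-run. For reference, a run like this is driven by the spdk_nvme_perf binary named in the error line. The invocation below is illustrative only: the flags are the tool's standard queue-depth/size/workload options, but the values are examples, not the ones target/shutdown.sh actually passes:

    # Illustrative spdk_nvme_perf run: queue depth 64 (-q), 4 KiB writes
    # (-o, -w) against one RDMA subsystem from this log, for 10 s (-t).
    # Keeping -q at or below the reported controller queue size avoids
    # queueing I/O requests at the NVMe driver.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'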
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:17:30.431 rmmod nvme_rdma
00:17:30.431 rmmod nvme_fabrics
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 788479 ']'
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 788479
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 788479 ']'
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 788479
00:17:30.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (788479) - No such process
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 788479 is not found'
00:17:30.431 Process with pid 788479 is not found
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:30.431 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:17:30.432 
00:17:30.432 real 0m8.539s
00:17:30.432 user 0m31.944s
00:17:30.432 sys 0m1.104s
00:17:30.432 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:30.432 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:17:30.432 ************************************
00:17:30.432 END TEST nvmf_shutdown_tc4 ************************************
00:17:30.432 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:17:30.432 
00:17:30.432 real 0m30.845s
00:17:30.432 user 1m33.611s
00:17:30.432 sys 0m8.851s
00:17:30.432 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:30.432 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:17:30.432 ************************************
00:17:30.432 END TEST nvmf_shutdown ************************************
00:17:30.432 04:05:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:17:30.432 04:05:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:30.432 04:05:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:30.432 04:05:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
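One detail worth noting in the teardown above: killprocess probes pid 788479 with kill -0 before killing it. Signal 0 delivers nothing; it only asks the kernel whether the pid exists and is signalable, which is why bash prints "No such process" here and the harness falls through to the not-found message. A sketch of that guard pattern, using standard kill semantics rather than SPDK's full killprocess (which does more bookkeeping around the pid):

    pid=788479  # the nvmf target pid recorded earlier in the run
    # kill -0 sends no signal: status 0 means the process exists,
    # nonzero means it is already gone (as in the log above).
    if kill -0 "$pid" 2> /dev/null; then
        kill "$pid"
    else
        echo "Process with pid $pid is not found"
    fi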
00:17:30.690 ************************************
00:17:30.690 START TEST nvmf_nsid ************************************
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:17:30.690 * Looking for test storage...
00:17:30.690 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:30.690 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:17:30.691 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:17:30.691 04:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:30.691 --rc genhtml_branch_coverage=1
00:17:30.691 --rc genhtml_function_coverage=1
00:17:30.691 --rc genhtml_legend=1
00:17:30.691 --rc geninfo_all_blocks=1
00:17:30.691 --rc geninfo_unexecuted_blocks=1
00:17:30.691 
00:17:30.691 '
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:30.691 --rc genhtml_branch_coverage=1
00:17:30.691 --rc genhtml_function_coverage=1
00:17:30.691 --rc genhtml_legend=1
00:17:30.691 --rc geninfo_all_blocks=1
00:17:30.691 --rc geninfo_unexecuted_blocks=1
00:17:30.691 
00:17:30.691 '
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:30.691 --rc genhtml_branch_coverage=1
00:17:30.691 --rc genhtml_function_coverage=1
00:17:30.691 --rc genhtml_legend=1
00:17:30.691 --rc geninfo_all_blocks=1
00:17:30.691 --rc geninfo_unexecuted_blocks=1
00:17:30.691 
00:17:30.691 '
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:30.691 --rc genhtml_branch_coverage=1
00:17:30.691 --rc genhtml_function_coverage=1
00:17:30.691 --rc genhtml_legend=1
00:17:30.691 --rc geninfo_all_blocks=1
00:17:30.691 --rc geninfo_unexecuted_blocks=1
00:17:30.691 
00:17:30.691 '
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.691 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:30.691 04:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.252 04:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:37.252 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:37.252 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:37.252 Found net devices under 0000:18:00.0: mlx_0_0 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:37.252 Found net devices under 0000:18:00.1: mlx_0_1 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:37.252 04:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
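
Note: the allocate_nic_ips loop above resolves each RDMA interface to its IPv4 address with a three-stage pipeline. The same one-liner, wrapped as a function for readability (a sketch of common.sh's get_ip_address, not a verbatim copy):

    # `ip -o -4 addr show IFACE` prints one line per address; field 4 is
    # "ADDR/PREFIXLEN", so cut -d/ -f1 drops the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig
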
00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:37.252 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:37.253 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:37.253 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:37.253 altname enp24s0f0np0 00:17:37.253 altname ens785f0np0 00:17:37.253 inet 192.168.100.8/24 scope global mlx_0_0 00:17:37.253 valid_lft forever preferred_lft forever 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:37.253 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:37.253 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:37.253 altname enp24s0f1np1 00:17:37.253 altname ens785f1np1 00:17:37.253 inet 192.168.100.9/24 scope global mlx_0_1 00:17:37.253 valid_lft forever preferred_lft forever 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:37.253 
04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:37.253 192.168.100.9' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:37.253 192.168.100.9' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:37.253 192.168.100.9' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:37.253 04:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=793292 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 793292 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 793292 ']' 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:37.253 [2024-12-10 04:05:30.738608] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:37.253 [2024-12-10 04:05:30.738659] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.253 [2024-12-10 04:05:30.796787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.253 [2024-12-10 04:05:30.835142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.253 [2024-12-10 04:05:30.835175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.253 [2024-12-10 04:05:30.835181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.253 [2024-12-10 04:05:30.835187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.253 [2024-12-10 04:05:30.835191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
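
Note: nvmfappstart backgrounds the nvmf_tgt binary and then blocks in waitforlisten (rpc_addr=/var/tmp/spdk.sock, max_retries=100) until the app's RPC server is reachable. The real helper is more involved than the trace reveals; a rough approximation of its shape, under the assumption that polling for the UNIX-domain socket is enough for illustration:

    # Background the target, then poll until the RPC socket appears or the
    # process dies. The real waitforlisten also retries an actual RPC call.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
        [ -S "$rpc_addr" ] && break                # socket is up: RPC is ready
        sleep 0.1
    done
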
00:17:37.253 [2024-12-10 04:05:30.835660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=793382 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=666ab414-deea-4ab1-a74a-c866a7b44306 00:17:37.253 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:37.254 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=c5d80c64-20ee-4791-b1a3-350ebe1665bf 00:17:37.254 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:37.254 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a87491d2-d4ca-4098-8494-774bc1d61379 00:17:37.254 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:37.254 04:05:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.254 04:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:37.254 null0 00:17:37.254 null1 00:17:37.254 null2 00:17:37.254 [2024-12-10 04:05:31.010918] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:37.254 [2024-12-10 04:05:31.010956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793382 ] 00:17:37.254 [2024-12-10 04:05:31.036993] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6dc8d0/0x6ed0b0) succeed. 00:17:37.254 [2024-12-10 04:05:31.045322] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6ddd80/0x76d140) succeed. 00:17:37.254 [2024-12-10 04:05:31.068554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.254 [2024-12-10 04:05:31.093411] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:37.254 [2024-12-10 04:05:31.107211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 793382 /var/tmp/tgt2.sock 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 793382 ']' 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:17:37.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:37.254 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:37.254 [2024-12-10 04:05:31.631347] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfaa8b0/0xf258c0) succeed. 00:17:37.511 [2024-12-10 04:05:31.640259] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10f9d90/0xf66f60) succeed. 
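
Note: with both targets up, the test connects to nqn.2024-10.io.spdk:cnode2 over RDMA and, as the trace that follows shows, verifies that each namespace's NGUID is simply its UUID with the dashes stripped. A condensed sketch of one such check (the explicit tr-based uppercasing is illustrative; the trace only shows the tr -d - step, with the case conversion happening elsewhere in common.sh):

    # An NVMe NGUID here is the 32-hex-digit form of the namespace UUID.
    ns1uuid=666ab414-deea-4ab1-a74a-c866a7b44306    # from uuidgen above
    expected=$(echo "$ns1uuid" | tr -d - | tr '[:lower:]' '[:upper:]')
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ $actual == "$expected" ]] && echo "nguid matches uuid for nvme0n1"
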
00:17:37.511 [2024-12-10 04:05:31.681022] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:37.511 nvme0n1 nvme0n2 00:17:37.511 nvme1n1 00:17:37.511 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:37.511 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:37.511 04:05:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 666ab414-deea-4ab1-a74a-c866a7b44306 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=666ab414deea4ab1a74ac866a7b44306 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 666AB414DEEA4AB1A74AC866A7B44306 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 666AB414DEEA4AB1A74AC866A7B44306 == \6\6\6\A\B\4\1\4\D\E\E\A\4\A\B\1\A\7\4\A\C\8\6\6\A\7\B\4\4\3\0\6 ]] 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:45.607 04:05:38 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid c5d80c64-20ee-4791-b1a3-350ebe1665bf 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c5d80c6420ee4791b1a3350ebe1665bf 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C5D80C6420EE4791B1A3350EBE1665BF 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ C5D80C6420EE4791B1A3350EBE1665BF == \C\5\D\8\0\C\6\4\2\0\E\E\4\7\9\1\B\1\A\3\3\5\0\E\B\E\1\6\6\5\B\F ]] 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a87491d2-d4ca-4098-8494-774bc1d61379 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a87491d2d4ca40988494774bc1d61379 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A87491D2D4CA40988494774BC1D61379 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A87491D2D4CA40988494774BC1D61379 == 
\A\8\7\4\9\1\D\2\D\4\C\A\4\0\9\8\8\4\9\4\7\7\4\B\C\1\D\6\1\3\7\9 ]] 00:17:45.607 04:05:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 793382 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 793382 ']' 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 793382 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 793382 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 793382' 00:17:52.188 killing process with pid 793382 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 793382 00:17:52.188 04:05:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 793382 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:52.188 rmmod nvme_rdma 00:17:52.188 rmmod nvme_fabrics 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 793292 ']' 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 793292 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 793292 ']' 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 793292 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 793292 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 793292' 00:17:52.188 killing process with pid 793292 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 793292 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 793292 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:52.188 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:52.188 00:17:52.188 real 0m21.660s 00:17:52.188 user 0m32.334s 00:17:52.188 sys 0m5.391s 00:17:52.189 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.189 04:05:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:52.189 ************************************ 00:17:52.189 END TEST nvmf_nsid 00:17:52.189 ************************************ 00:17:52.189 04:05:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:52.189 00:17:52.189 real 7m8.345s 00:17:52.189 user 17m12.479s 00:17:52.189 sys 1m54.038s 00:17:52.189 04:05:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.189 04:05:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.189 ************************************ 00:17:52.189 END TEST nvmf_target_extra 00:17:52.189 ************************************ 00:17:52.446 04:05:46 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:17:52.446 04:05:46 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.446 04:05:46 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.446 04:05:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:52.446 ************************************ 00:17:52.446 START TEST nvmf_host 00:17:52.446 ************************************ 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:17:52.446 * Looking for test storage... 
00:17:52.446 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:52.446 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.447 --rc genhtml_branch_coverage=1 00:17:52.447 --rc genhtml_function_coverage=1 00:17:52.447 --rc genhtml_legend=1 00:17:52.447 --rc geninfo_all_blocks=1 00:17:52.447 --rc geninfo_unexecuted_blocks=1 00:17:52.447 00:17:52.447 ' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:17:52.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.447 --rc genhtml_branch_coverage=1 00:17:52.447 --rc genhtml_function_coverage=1 00:17:52.447 --rc genhtml_legend=1 00:17:52.447 --rc geninfo_all_blocks=1 00:17:52.447 --rc geninfo_unexecuted_blocks=1 00:17:52.447 00:17:52.447 ' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.447 --rc genhtml_branch_coverage=1 00:17:52.447 --rc genhtml_function_coverage=1 00:17:52.447 --rc genhtml_legend=1 00:17:52.447 --rc geninfo_all_blocks=1 00:17:52.447 --rc geninfo_unexecuted_blocks=1 00:17:52.447 00:17:52.447 ' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.447 --rc genhtml_branch_coverage=1 00:17:52.447 --rc genhtml_function_coverage=1 00:17:52.447 --rc genhtml_legend=1 00:17:52.447 --rc geninfo_all_blocks=1 00:17:52.447 --rc geninfo_unexecuted_blocks=1 00:17:52.447 00:17:52.447 ' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.447 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.447 04:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.705 ************************************ 00:17:52.705 START TEST nvmf_multicontroller 00:17:52.705 ************************************ 00:17:52.705 04:05:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:52.705 * Looking for test storage... 00:17:52.705 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:52.705 04:05:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.705 04:05:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.705 04:05:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.705 04:05:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.705 04:05:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
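Every test above passes through the same run_test harness: an argument sanity check, a START banner, a timed invocation of the test script, and an END banner; the real/user/sys lines in the trace come from that timing. A rough sketch of such a wrapper, assuming the banner text seen in the log (not the verbatim autotest_common.sh function):

    # run_test-style wrapper: banner, timed execution, banner.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # illustrative invocation, matching the traced call shape:
    # run_test_sketch nvmf_multicontroller ./host/multicontroller.sh --transport=rdma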
ver1_l : ver2_l) )) 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.705 --rc genhtml_branch_coverage=1 00:17:52.705 --rc genhtml_function_coverage=1 00:17:52.705 --rc genhtml_legend=1 00:17:52.705 --rc geninfo_all_blocks=1 00:17:52.705 --rc geninfo_unexecuted_blocks=1 00:17:52.705 00:17:52.705 ' 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:52.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.705 --rc genhtml_branch_coverage=1 00:17:52.705 --rc genhtml_function_coverage=1 00:17:52.705 --rc genhtml_legend=1 00:17:52.705 --rc geninfo_all_blocks=1 00:17:52.705 --rc geninfo_unexecuted_blocks=1 00:17:52.705 00:17:52.705 ' 00:17:52.705 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.705 --rc genhtml_branch_coverage=1 00:17:52.705 --rc genhtml_function_coverage=1 00:17:52.706 --rc genhtml_legend=1 00:17:52.706 --rc geninfo_all_blocks=1 00:17:52.706 --rc geninfo_unexecuted_blocks=1 00:17:52.706 00:17:52.706 ' 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.706 --rc genhtml_branch_coverage=1 00:17:52.706 --rc genhtml_function_coverage=1 00:17:52.706 --rc genhtml_legend=1 00:17:52.706 --rc geninfo_all_blocks=1 00:17:52.706 --rc geninfo_unexecuted_blocks=1 00:17:52.706 00:17:52.706 ' 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
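The lt 1.15 2 / cmp_versions trace above is a component-wise dotted-version comparison: both version strings are split on '.', '-', and ':' into arrays, then compared field by field, with missing fields treated as 0. A sketch of that logic, under the assumption that all components are numeric:

    # version_lt A B: succeed when version A is strictly older than B.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # pad shorter version with 0
            if (( a < b )); then return 0; fi
            if (( a > b )); then return 1; fi
        done
        return 1   # equal, so not strictly less-than
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"   # true on this rig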
00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.706 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.706 04:05:47 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:17:52.706 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:17:52.706 00:17:52.706 real 0m0.204s 00:17:52.706 user 0m0.117s 00:17:52.706 sys 0m0.096s 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:52.706 ************************************ 00:17:52.706 END TEST nvmf_multicontroller 00:17:52.706 ************************************ 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.706 04:05:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.964 ************************************ 00:17:52.964 START TEST nvmf_aer 00:17:52.964 ************************************ 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:52.964 * Looking for test storage... 
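nvmf_multicontroller finishes in about 0.2 s because of a transport guard at the top of the script: on RDMA the test prints a skip message and exits 0, so the harness still records a passing END banner. The shape of such a guard (the argument parsing here is illustrative, not the literal multicontroller.sh source):

    # Bail out cleanly when the transport cannot be exercised; exit 0 keeps
    # the run_test wrapper counting this as a pass rather than a failure.
    TEST_TRANSPORT=${1#--transport=}
    if [ "$TEST_TRANSPORT" = rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to' \
             'configure the same IP for host and target.'
        exit 0
    fi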
00:17:52.964 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:17:52.964 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.965 --rc genhtml_branch_coverage=1 00:17:52.965 --rc genhtml_function_coverage=1 00:17:52.965 --rc genhtml_legend=1 00:17:52.965 --rc geninfo_all_blocks=1 00:17:52.965 --rc geninfo_unexecuted_blocks=1 00:17:52.965 00:17:52.965 ' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:52.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.965 --rc genhtml_branch_coverage=1 00:17:52.965 --rc genhtml_function_coverage=1 00:17:52.965 --rc genhtml_legend=1 00:17:52.965 --rc geninfo_all_blocks=1 00:17:52.965 --rc geninfo_unexecuted_blocks=1 00:17:52.965 00:17:52.965 ' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.965 --rc genhtml_branch_coverage=1 00:17:52.965 --rc genhtml_function_coverage=1 00:17:52.965 --rc genhtml_legend=1 00:17:52.965 --rc geninfo_all_blocks=1 00:17:52.965 --rc geninfo_unexecuted_blocks=1 00:17:52.965 00:17:52.965 ' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.965 --rc genhtml_branch_coverage=1 00:17:52.965 --rc genhtml_function_coverage=1 00:17:52.965 --rc genhtml_legend=1 00:17:52.965 --rc geninfo_all_blocks=1 00:17:52.965 --rc geninfo_unexecuted_blocks=1 00:17:52.965 00:17:52.965 ' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.965 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:17:52.965 04:05:47 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.523 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.523 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:59.524 04:05:52 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:59.524 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:59.524 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:59.524 Found net devices under 0000:18:00.0: mlx_0_0 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.524 
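The "Found net devices under 0000:18:00.0: mlx_0_0" lines come from a sysfs lookup: the script globs the net/ directory of each matched PCI function and strips everything but the basename. The same idiom in isolation (the PCI address is this rig's; note that an unmatched glob leaves the literal pattern in the array, which the traced script screens out with its '(( 1 == 0 ))' count checks):

    pci=0000:18:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"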
04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:59.524 Found net devices under 0000:18:00.1: mlx_0_1 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:59.524 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.525 04:05:52 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:59.525 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:59.525 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:59.525 altname enp24s0f0np0 00:17:59.525 altname ens785f0np0 00:17:59.525 inet 192.168.100.8/24 scope global mlx_0_0 00:17:59.525 valid_lft forever preferred_lft forever 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:59.525 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:59.525 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:59.525 altname enp24s0f1np1 00:17:59.525 altname ens785f1np1 00:17:59.525 inet 192.168.100.9/24 scope global mlx_0_1 00:17:59.525 valid_lft forever preferred_lft forever 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:59.525 192.168.100.9' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:59.525 192.168.100.9' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:59.525 192.168.100.9' 
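allocate_nic_ips derives each interface's IPv4 address with the exact pipeline traced above: 'ip -o' prints one record per line, the fourth field is ADDR/PREFIX, and cut drops the prefix length. Extracted as a standalone helper:

    # get_ip_address as traced: bare IPv4 address of a named interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig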
00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=799603 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 799603 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 799603 ']' 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.525 04:05:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:59.525 [2024-12-10 04:05:52.963097] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:17:59.525 [2024-12-10 04:05:52.963145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.525 [2024-12-10 04:05:53.020970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.525 [2024-12-10 04:05:53.061344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.525 [2024-12-10 04:05:53.061378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.525 [2024-12-10 04:05:53.061385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.525 [2024-12-10 04:05:53.061391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:59.525 [2024-12-10 04:05:53.061395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.525 [2024-12-10 04:05:53.062632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.525 [2024-12-10 04:05:53.062724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.525 [2024-12-10 04:05:53.062817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.525 [2024-12-10 04:05:53.062818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.525 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.525 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:17:59.525 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.525 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.525 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 [2024-12-10 04:05:53.221519] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17f00c0/0x17f45b0) succeed. 00:17:59.526 [2024-12-10 04:05:53.230253] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17f1750/0x1835c50) succeed. 
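The rpc_cmd above created the RDMA transport; the trace that follows adds the malloc bdev, subsystem, namespace, and listener, then dumps the subsystems as JSON. The same sequence expressed as direct scripts/rpc.py calls against the default /var/tmp/spdk.sock, with every value copied from the trace:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 --name Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_get_subsystems   # returns the JSON shown in the trace below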
00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 Malloc0 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 [2024-12-10 04:05:53.406326] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 [ 00:17:59.526 { 00:17:59.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:59.526 "subtype": "Discovery", 00:17:59.526 "listen_addresses": [], 00:17:59.526 "allow_any_host": true, 00:17:59.526 "hosts": [] 00:17:59.526 }, 00:17:59.526 { 00:17:59.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.526 "subtype": "NVMe", 00:17:59.526 "listen_addresses": [ 00:17:59.526 { 00:17:59.526 "trtype": "RDMA", 00:17:59.526 "adrfam": "IPv4", 00:17:59.526 "traddr": "192.168.100.8", 00:17:59.526 "trsvcid": "4420" 00:17:59.526 } 00:17:59.526 ], 00:17:59.526 "allow_any_host": true, 00:17:59.526 "hosts": [], 00:17:59.526 "serial_number": "SPDK00000000000001", 00:17:59.526 "model_number": "SPDK bdev Controller", 00:17:59.526 "max_namespaces": 2, 00:17:59.526 "min_cntlid": 1, 00:17:59.526 "max_cntlid": 65519, 00:17:59.526 "namespaces": [ 00:17:59.526 { 00:17:59.526 "nsid": 1, 00:17:59.526 "bdev_name": "Malloc0", 00:17:59.526 "name": "Malloc0", 00:17:59.526 "nguid": "16D997277E2746998667160DF0503A5D", 00:17:59.526 "uuid": "16d99727-7e27-4699-8667-160df0503a5d" 00:17:59.526 } 00:17:59.526 ] 00:17:59.526 } 00:17:59.526 ] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=799729 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 Malloc1 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.526 [ 00:17:59.526 { 00:17:59.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:59.526 "subtype": "Discovery", 00:17:59.526 "listen_addresses": [], 00:17:59.526 "allow_any_host": true, 00:17:59.526 "hosts": [] 00:17:59.526 }, 00:17:59.526 { 00:17:59.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.526 "subtype": "NVMe", 00:17:59.526 "listen_addresses": [ 00:17:59.526 { 00:17:59.526 "trtype": "RDMA", 00:17:59.526 "adrfam": "IPv4", 00:17:59.526 "traddr": "192.168.100.8", 00:17:59.526 "trsvcid": "4420" 00:17:59.526 } 00:17:59.526 ], 00:17:59.526 "allow_any_host": true, 00:17:59.526 "hosts": [], 00:17:59.526 "serial_number": "SPDK00000000000001", 00:17:59.526 "model_number": "SPDK bdev Controller", 00:17:59.526 "max_namespaces": 2, 00:17:59.526 "min_cntlid": 1, 00:17:59.526 "max_cntlid": 65519, 00:17:59.526 "namespaces": [ 00:17:59.526 { 00:17:59.526 "nsid": 1, 00:17:59.526 "bdev_name": "Malloc0", 00:17:59.526 "name": "Malloc0", 00:17:59.526 "nguid": "16D997277E2746998667160DF0503A5D", 00:17:59.526 "uuid": "16d99727-7e27-4699-8667-160df0503a5d" 00:17:59.526 }, 00:17:59.526 { 00:17:59.526 "nsid": 2, 00:17:59.526 "bdev_name": "Malloc1", 00:17:59.526 "name": "Malloc1", 00:17:59.526 "nguid": "B48537FC8EB54FD2AB8706F8DB6C7CB8", 00:17:59.526 "uuid": "b48537fc-8eb5-4fd2-ab87-06f8db6c7cb8" 00:17:59.526 } 00:17:59.526 ] 00:17:59.526 } 00:17:59.526 ] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 799729 00:17:59.526 Asynchronous Event Request test 00:17:59.526 Attaching to 192.168.100.8 00:17:59.526 Attached to 192.168.100.8 00:17:59.526 Registering asynchronous event callbacks... 00:17:59.526 Starting namespace attribute notice tests for all controllers... 00:17:59.526 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:59.526 aer_cb - Changed Namespace 00:17:59.526 Cleaning up... 
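The pass condition above hinges on two things: the aer tool touches /tmp/aer_touch_file once its AER callbacks are registered, and the hot-add of Malloc1 as namespace 2 raises a Changed Namespace List AEN (the "aer_cb - Changed Namespace" line). The polling the trace steps through (i=0, 1, 2 with 0.1 s sleeps) is the waitforfile helper; a sketch reconstructed from the xtrace:

waitforfile() {
    # Poll up to 20 s (200 iterations x 0.1 s) for the tool to create the file
    local i=0
    while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    # Succeed only if the file actually appeared
    [ -e "$1" ]
}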
00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.526 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:59.527 rmmod nvme_rdma 00:17:59.527 rmmod nvme_fabrics 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 799603 ']' 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 799603 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 799603 ']' 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 799603 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 799603 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 799603' 00:17:59.527 killing process with pid 
799603 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 799603 00:17:59.527 04:05:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 799603 00:17:59.787 04:05:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:59.787 04:05:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:59.787 00:17:59.787 real 0m6.992s 00:17:59.787 user 0m5.607s 00:17:59.787 sys 0m4.707s 00:17:59.787 04:05:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.787 04:05:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:59.787 ************************************ 00:17:59.787 END TEST nvmf_aer 00:17:59.787 ************************************ 00:17:59.787 04:05:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:59.787 04:05:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:59.787 04:05:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.787 04:05:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.787 ************************************ 00:17:59.787 START TEST nvmf_async_init 00:17:59.787 ************************************ 00:17:59.787 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:00.045 * Looking for test storage... 00:18:00.045 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:18:00.045 04:05:54 
nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:00.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.045 --rc genhtml_branch_coverage=1 00:18:00.045 --rc genhtml_function_coverage=1 00:18:00.045 --rc genhtml_legend=1 00:18:00.045 --rc geninfo_all_blocks=1 00:18:00.045 --rc geninfo_unexecuted_blocks=1 00:18:00.045 00:18:00.045 ' 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:00.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.045 --rc genhtml_branch_coverage=1 00:18:00.045 --rc genhtml_function_coverage=1 00:18:00.045 --rc genhtml_legend=1 00:18:00.045 --rc geninfo_all_blocks=1 00:18:00.045 --rc geninfo_unexecuted_blocks=1 00:18:00.045 00:18:00.045 ' 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:00.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.045 --rc genhtml_branch_coverage=1 00:18:00.045 --rc genhtml_function_coverage=1 00:18:00.045 --rc genhtml_legend=1 00:18:00.045 --rc geninfo_all_blocks=1 00:18:00.045 --rc geninfo_unexecuted_blocks=1 00:18:00.045 00:18:00.045 ' 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:00.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.045 --rc genhtml_branch_coverage=1 00:18:00.045 --rc genhtml_function_coverage=1 00:18:00.045 --rc genhtml_legend=1 00:18:00.045 --rc geninfo_all_blocks=1 00:18:00.045 --rc geninfo_unexecuted_blocks=1 00:18:00.045 00:18:00.045 ' 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.045 
04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.045 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.046 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
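The lt 1.15 2 / cmp_versions trace a few records back is scripts/common.sh checking whether the installed lcov predates 2.x before choosing coverage flags. The comparison splits both versions on '.', '-' and ':' and compares field by field; a simplified sketch assuming purely numeric components (the real script validates each field with its decimal helper):

version_lt() {
    # Return 0 (true) if dotted version $1 is strictly less than $2
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ver1[v] > ver2[v] )) && return 1   # unset fields evaluate as 0
        (( ver1[v] < ver2[v] )) && return 0
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2.x"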
00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b5939e51daac40369b567511023f264a 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:18:00.046 04:05:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:06.602 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:06.602 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:06.602 Found net devices under 0000:18:00.0: mlx_0_0 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:06.602 Found net devices under 0000:18:00.1: mlx_0_1 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:06.602 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:06.603 04:05:59 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:06.603 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:06.603 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:06.603 altname enp24s0f0np0 00:18:06.603 altname ens785f0np0 00:18:06.603 inet 192.168.100.8/24 scope global mlx_0_0 00:18:06.603 valid_lft forever preferred_lft forever 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:06.603 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:06.603 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:06.603 altname enp24s0f1np1 00:18:06.603 altname ens785f1np1 00:18:06.603 inet 192.168.100.9/24 scope global mlx_0_1 00:18:06.603 valid_lft forever preferred_lft forever 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:06.603 04:05:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 
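Every address lookup in this stretch is the same three-stage pipeline, and the records that follow stack its results into RDMA_IP_LIST before peeling off the first and second entries. A sketch of that extraction, assuming the mlx_0_0/mlx_0_1 interface names from this run:

get_ip_address() {
    # First IPv4 address on the interface, prefix length stripped
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9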
00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:06.603 192.168.100.9' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:06.603 192.168.100.9' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:06.603 192.168.100.9' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=803033 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 803033 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 803033 ']' 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.603 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.603 [2024-12-10 04:06:00.134944] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:06.603 [2024-12-10 04:06:00.134989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.603 [2024-12-10 04:06:00.193765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.603 [2024-12-10 04:06:00.233410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.603 [2024-12-10 04:06:00.233447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.604 [2024-12-10 04:06:00.233455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.604 [2024-12-10 04:06:00.233460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.604 [2024-12-10 04:06:00.233465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
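This second nvmfappstart differs from the aer run only in its core mask: -m 0x1 pins the target to one reactor, which is why only core 0 comes up below. The pattern is launch-in-background, record the pid, then block until the RPC socket answers; a simplified stand-in for autotest_common.sh's waitforlisten (the exact polling logic here is an assumption, not the script's implementation):

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# Poll until the app accepts RPCs on the default socket
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && scripts/rpc.py spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done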
00:18:06.604 [2024-12-10 04:06:00.233930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 [2024-12-10 04:06:00.388104] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf14dc0/0xf192b0) succeed. 00:18:06.604 [2024-12-10 04:06:00.396153] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf16270/0xf5a950) succeed. 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 null0 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b5939e51daac40369b567511023f264a 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 [2024-12-10 04:06:00.467364] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 nvme0n1 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 [ 00:18:06.604 { 00:18:06.604 "name": "nvme0n1", 00:18:06.604 "aliases": [ 00:18:06.604 "b5939e51-daac-4036-9b56-7511023f264a" 00:18:06.604 ], 00:18:06.604 "product_name": "NVMe disk", 00:18:06.604 "block_size": 512, 00:18:06.604 "num_blocks": 2097152, 00:18:06.604 "uuid": "b5939e51-daac-4036-9b56-7511023f264a", 00:18:06.604 "numa_id": 0, 00:18:06.604 "assigned_rate_limits": { 00:18:06.604 "rw_ios_per_sec": 0, 00:18:06.604 "rw_mbytes_per_sec": 0, 00:18:06.604 "r_mbytes_per_sec": 0, 00:18:06.604 "w_mbytes_per_sec": 0 00:18:06.604 }, 00:18:06.604 "claimed": false, 00:18:06.604 "zoned": false, 00:18:06.604 "supported_io_types": { 00:18:06.604 "read": true, 00:18:06.604 "write": true, 00:18:06.604 "unmap": false, 00:18:06.604 "flush": true, 00:18:06.604 "reset": true, 00:18:06.604 "nvme_admin": true, 00:18:06.604 "nvme_io": true, 00:18:06.604 "nvme_io_md": false, 00:18:06.604 "write_zeroes": true, 00:18:06.604 "zcopy": false, 00:18:06.604 "get_zone_info": false, 00:18:06.604 "zone_management": false, 00:18:06.604 "zone_append": false, 00:18:06.604 "compare": true, 00:18:06.604 "compare_and_write": true, 00:18:06.604 "abort": true, 00:18:06.604 "seek_hole": false, 00:18:06.604 "seek_data": false, 00:18:06.604 "copy": true, 00:18:06.604 "nvme_iov_md": false 00:18:06.604 }, 00:18:06.604 "memory_domains": [ 00:18:06.604 { 00:18:06.604 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:06.604 "dma_device_type": 0 00:18:06.604 } 00:18:06.604 ], 00:18:06.604 "driver_specific": { 00:18:06.604 "nvme": [ 00:18:06.604 { 00:18:06.604 "trid": { 00:18:06.604 "trtype": "RDMA", 00:18:06.604 "adrfam": "IPv4", 00:18:06.604 "traddr": "192.168.100.8", 00:18:06.604 "trsvcid": "4420", 00:18:06.604 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:06.604 }, 00:18:06.604 "ctrlr_data": { 00:18:06.604 "cntlid": 1, 00:18:06.604 "vendor_id": "0x8086", 00:18:06.604 "model_number": "SPDK bdev Controller", 00:18:06.604 "serial_number": "00000000000000000000", 00:18:06.604 "firmware_revision": "25.01", 00:18:06.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:06.604 "oacs": { 00:18:06.604 "security": 0, 
00:18:06.604 "format": 0, 00:18:06.604 "firmware": 0, 00:18:06.604 "ns_manage": 0 00:18:06.604 }, 00:18:06.604 "multi_ctrlr": true, 00:18:06.604 "ana_reporting": false 00:18:06.604 }, 00:18:06.604 "vs": { 00:18:06.604 "nvme_version": "1.3" 00:18:06.604 }, 00:18:06.604 "ns_data": { 00:18:06.604 "id": 1, 00:18:06.604 "can_share": true 00:18:06.604 } 00:18:06.604 } 00:18:06.604 ], 00:18:06.604 "mp_policy": "active_passive" 00:18:06.604 } 00:18:06.604 } 00:18:06.604 ] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 [2024-12-10 04:06:00.569517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:06.604 [2024-12-10 04:06:00.594346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:06.604 [2024-12-10 04:06:00.614909] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.604 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.604 [ 00:18:06.604 { 00:18:06.604 "name": "nvme0n1", 00:18:06.604 "aliases": [ 00:18:06.604 "b5939e51-daac-4036-9b56-7511023f264a" 00:18:06.604 ], 00:18:06.604 "product_name": "NVMe disk", 00:18:06.604 "block_size": 512, 00:18:06.604 "num_blocks": 2097152, 00:18:06.604 "uuid": "b5939e51-daac-4036-9b56-7511023f264a", 00:18:06.604 "numa_id": 0, 00:18:06.604 "assigned_rate_limits": { 00:18:06.604 "rw_ios_per_sec": 0, 00:18:06.604 "rw_mbytes_per_sec": 0, 00:18:06.604 "r_mbytes_per_sec": 0, 00:18:06.604 "w_mbytes_per_sec": 0 00:18:06.604 }, 00:18:06.604 "claimed": false, 00:18:06.604 "zoned": false, 00:18:06.604 "supported_io_types": { 00:18:06.604 "read": true, 00:18:06.604 "write": true, 00:18:06.604 "unmap": false, 00:18:06.604 "flush": true, 00:18:06.604 "reset": true, 00:18:06.604 "nvme_admin": true, 00:18:06.604 "nvme_io": true, 00:18:06.604 "nvme_io_md": false, 00:18:06.604 "write_zeroes": true, 00:18:06.604 "zcopy": false, 00:18:06.604 "get_zone_info": false, 00:18:06.604 "zone_management": false, 00:18:06.604 "zone_append": false, 00:18:06.604 "compare": true, 00:18:06.604 "compare_and_write": true, 00:18:06.604 "abort": true, 00:18:06.604 "seek_hole": false, 00:18:06.604 "seek_data": false, 00:18:06.604 "copy": true, 00:18:06.604 "nvme_iov_md": false 00:18:06.604 }, 00:18:06.604 "memory_domains": [ 00:18:06.604 { 00:18:06.604 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:06.604 "dma_device_type": 0 00:18:06.604 } 00:18:06.604 ], 00:18:06.604 "driver_specific": { 00:18:06.604 "nvme": [ 00:18:06.604 { 00:18:06.604 "trid": { 00:18:06.604 "trtype": "RDMA", 00:18:06.604 "adrfam": "IPv4", 00:18:06.604 "traddr": "192.168.100.8", 
00:18:06.604 "trsvcid": "4420", 00:18:06.604 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:06.604 }, 00:18:06.604 "ctrlr_data": { 00:18:06.604 "cntlid": 2, 00:18:06.605 "vendor_id": "0x8086", 00:18:06.605 "model_number": "SPDK bdev Controller", 00:18:06.605 "serial_number": "00000000000000000000", 00:18:06.605 "firmware_revision": "25.01", 00:18:06.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:06.605 "oacs": { 00:18:06.605 "security": 0, 00:18:06.605 "format": 0, 00:18:06.605 "firmware": 0, 00:18:06.605 "ns_manage": 0 00:18:06.605 }, 00:18:06.605 "multi_ctrlr": true, 00:18:06.605 "ana_reporting": false 00:18:06.605 }, 00:18:06.605 "vs": { 00:18:06.605 "nvme_version": "1.3" 00:18:06.605 }, 00:18:06.605 "ns_data": { 00:18:06.605 "id": 1, 00:18:06.605 "can_share": true 00:18:06.605 } 00:18:06.605 } 00:18:06.605 ], 00:18:06.605 "mp_policy": "active_passive" 00:18:06.605 } 00:18:06.605 } 00:18:06.605 ] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.1KL7z3QjPG 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.1KL7z3QjPG 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.1KL7z3QjPG 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.605 [2024-12-10 04:06:00.697499] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.605 [2024-12-10 04:06:00.717556] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.605 nvme0n1 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.605 [ 00:18:06.605 { 00:18:06.605 "name": "nvme0n1", 00:18:06.605 "aliases": [ 00:18:06.605 "b5939e51-daac-4036-9b56-7511023f264a" 00:18:06.605 ], 00:18:06.605 "product_name": "NVMe disk", 00:18:06.605 "block_size": 512, 00:18:06.605 "num_blocks": 2097152, 00:18:06.605 "uuid": "b5939e51-daac-4036-9b56-7511023f264a", 00:18:06.605 "numa_id": 0, 00:18:06.605 "assigned_rate_limits": { 00:18:06.605 "rw_ios_per_sec": 0, 00:18:06.605 "rw_mbytes_per_sec": 0, 00:18:06.605 "r_mbytes_per_sec": 0, 00:18:06.605 "w_mbytes_per_sec": 0 00:18:06.605 }, 00:18:06.605 "claimed": false, 00:18:06.605 "zoned": false, 00:18:06.605 "supported_io_types": { 00:18:06.605 "read": true, 00:18:06.605 "write": true, 00:18:06.605 "unmap": false, 00:18:06.605 "flush": true, 00:18:06.605 "reset": true, 00:18:06.605 "nvme_admin": true, 00:18:06.605 "nvme_io": true, 00:18:06.605 "nvme_io_md": false, 00:18:06.605 "write_zeroes": true, 00:18:06.605 "zcopy": false, 00:18:06.605 "get_zone_info": false, 00:18:06.605 "zone_management": false, 00:18:06.605 "zone_append": false, 00:18:06.605 "compare": true, 00:18:06.605 "compare_and_write": true, 00:18:06.605 "abort": true, 00:18:06.605 "seek_hole": false, 00:18:06.605 "seek_data": false, 00:18:06.605 "copy": true, 00:18:06.605 "nvme_iov_md": false 00:18:06.605 }, 00:18:06.605 "memory_domains": [ 00:18:06.605 { 00:18:06.605 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:06.605 "dma_device_type": 0 00:18:06.605 } 00:18:06.605 ], 00:18:06.605 "driver_specific": { 00:18:06.605 "nvme": [ 00:18:06.605 { 00:18:06.605 "trid": { 00:18:06.605 "trtype": "RDMA", 00:18:06.605 "adrfam": "IPv4", 00:18:06.605 "traddr": "192.168.100.8", 00:18:06.605 "trsvcid": "4421", 00:18:06.605 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:06.605 }, 00:18:06.605 "ctrlr_data": { 00:18:06.605 "cntlid": 3, 00:18:06.605 "vendor_id": "0x8086", 00:18:06.605 "model_number": "SPDK bdev Controller", 00:18:06.605 
"serial_number": "00000000000000000000", 00:18:06.605 "firmware_revision": "25.01", 00:18:06.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:06.605 "oacs": { 00:18:06.605 "security": 0, 00:18:06.605 "format": 0, 00:18:06.605 "firmware": 0, 00:18:06.605 "ns_manage": 0 00:18:06.605 }, 00:18:06.605 "multi_ctrlr": true, 00:18:06.605 "ana_reporting": false 00:18:06.605 }, 00:18:06.605 "vs": { 00:18:06.605 "nvme_version": "1.3" 00:18:06.605 }, 00:18:06.605 "ns_data": { 00:18:06.605 "id": 1, 00:18:06.605 "can_share": true 00:18:06.605 } 00:18:06.605 } 00:18:06.605 ], 00:18:06.605 "mp_policy": "active_passive" 00:18:06.605 } 00:18:06.605 } 00:18:06.605 ] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.1KL7z3QjPG 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:06.605 rmmod nvme_rdma 00:18:06.605 rmmod nvme_fabrics 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 803033 ']' 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 803033 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 803033 ']' 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 803033 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 803033 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.605 04:06:00 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 803033' 00:18:06.605 killing process with pid 803033 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 803033 00:18:06.605 04:06:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 803033 00:18:06.863 04:06:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:06.863 04:06:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:06.863 00:18:06.863 real 0m6.973s 00:18:06.863 user 0m2.844s 00:18:06.863 sys 0m4.632s 00:18:06.863 04:06:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.863 04:06:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:06.863 ************************************ 00:18:06.863 END TEST nvmf_async_init 00:18:06.863 ************************************ 00:18:06.863 04:06:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:06.863 04:06:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:06.863 04:06:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.863 04:06:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.863 ************************************ 00:18:06.863 START TEST dma 00:18:06.863 ************************************ 00:18:06.863 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:07.121 * Looking for test storage... 
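The async_init test that just finished drives both the target and the host side entirely through SPDK's RPC layer. Condensed from the trace, the whole setup fits in a few rpc.py calls; the following is a sketch only, assuming an already-running nvmf_tgt and the stock scripts/rpc.py on its default socket, with the bdev name, subsystem NQN, and namespace UUID copied verbatim from the trace rather than being required values:

#!/usr/bin/env bash
# Minimal reconstruction of the nvmf_async_init target/host setup traced above.
# Assumes nvmf_tgt is already running and rpc.py reaches its default RPC socket.
RPC=./scripts/rpc.py

# RDMA transport with the same shared-buffer count the test used
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024

# Backing namespace: a 1024-block x 512-byte null bdev
$RPC bdev_null_create null0 1024 512
$RPC bdev_wait_for_examine

# Subsystem, namespace (fixed UUID so the host sees a stable alias), listener on 4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b5939e51daac40369b567511023f264a
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

# Host side: attach a bdev controller over RDMA and inspect the resulting bdev
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$RPC bdev_get_bdevs -b nvme0n1

The bdev_get_bdevs dump is where the test checks that the namespace UUID round-trips into the bdev alias (note the cntlid bumping from 1 to 2 to 3 across the three dumps above); bdev_nvme_reset_controller nvme0 and bdev_nvme_detach_controller nvme0 then exercise the reset and teardown paths seen in the trace.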
00:18:07.121 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:07.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.121 --rc genhtml_branch_coverage=1 00:18:07.121 --rc genhtml_function_coverage=1 00:18:07.121 --rc genhtml_legend=1 00:18:07.121 --rc geninfo_all_blocks=1 00:18:07.121 --rc geninfo_unexecuted_blocks=1 00:18:07.121 00:18:07.121 ' 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:07.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.121 --rc genhtml_branch_coverage=1 00:18:07.121 --rc genhtml_function_coverage=1 00:18:07.121 --rc genhtml_legend=1 00:18:07.121 --rc geninfo_all_blocks=1 00:18:07.121 --rc geninfo_unexecuted_blocks=1 00:18:07.121 00:18:07.121 ' 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:07.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.121 --rc genhtml_branch_coverage=1 00:18:07.121 --rc genhtml_function_coverage=1 00:18:07.121 --rc genhtml_legend=1 00:18:07.121 --rc geninfo_all_blocks=1 00:18:07.121 --rc geninfo_unexecuted_blocks=1 00:18:07.121 00:18:07.121 ' 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:07.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.121 --rc genhtml_branch_coverage=1 00:18:07.121 --rc genhtml_function_coverage=1 00:18:07.121 --rc genhtml_legend=1 00:18:07.121 --rc geninfo_all_blocks=1 00:18:07.121 --rc geninfo_unexecuted_blocks=1 00:18:07.121 00:18:07.121 ' 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.121 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:07.122 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
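One leg of the async_init trace worth pulling out before the DMA run gets going is its secure-channel setup on port 4421. A hedged reconstruction follows, with the PSK literal and every flag copied from the trace; the target itself logged "TLS support is considered experimental" when the controller attached, so treat this as illustrative rather than a supported recipe:

# Secure-channel variant from the async_init trace above (experimental TLS support).
RPC=./scripts/rpc.py            # assumed path, as in the earlier sketch
KEY_PATH=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
chmod 0600 "$KEY_PATH"
$RPC keyring_file_add_key key0 "$KEY_PATH"

# Close the subsystem to unknown hosts, then open a PSK-protected listener on 4421
$RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0

# Host side: attach with the matching host NQN and key
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0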
00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:18:07.122 04:06:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:18:13.676 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:13.677 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:13.677 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:13.677 Found net devices under 0000:18:00.0: mlx_0_0 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:13.677 Found net devices under 0000:18:00.1: mlx_0_1 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:13.677 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:13.677 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:13.677 altname enp24s0f0np0 00:18:13.677 altname ens785f0np0 00:18:13.677 inet 192.168.100.8/24 scope global mlx_0_0 00:18:13.677 valid_lft forever preferred_lft forever 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:13.677 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:13.677 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:13.677 altname enp24s0f1np1 00:18:13.677 altname ens785f1np1 00:18:13.677 inet 192.168.100.9/24 scope global mlx_0_1 00:18:13.677 valid_lft forever preferred_lft forever 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:13.677 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:13.678 192.168.100.9' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:13.678 192.168.100.9' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:13.678 192.168.100.9' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=807152 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 807152 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 807152 ']' 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:13.678 [2024-12-10 04:06:07.353241] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:13.678 [2024-12-10 04:06:07.353299] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.678 [2024-12-10 04:06:07.412739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:13.678 [2024-12-10 04:06:07.453866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.678 [2024-12-10 04:06:07.453898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.678 [2024-12-10 04:06:07.453905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.678 [2024-12-10 04:06:07.453910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.678 [2024-12-10 04:06:07.453915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
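The long nvmf/common.sh stretch above is RDMA NIC discovery done by hand: enumerate the Mellanox PCI functions, map each to a net device, load the ib_*/rdma_* kernel modules, then harvest one IPv4 address per RDMA interface. The address-harvesting core reduces to the sketch below; the mlx_0_0/mlx_0_1 names are this rig's, since the real script derives them from the PCI bus scan rather than hard-coding them:

# IPv4-per-RDMA-interface harvesting, as traced in get_ip_address above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 on this rig
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 on this rig

From this point the target application runs with -m 0x3 (reactors on cores 0 and 1, as the notices above show) while each test_dma invocation pins itself to -m 0xc, which is why every per-core latency table later in the run reports exactly cores 2 and 3.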
00:18:13.678 [2024-12-10 04:06:07.454961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.678 [2024-12-10 04:06:07.454963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:13.678 [2024-12-10 04:06:07.604717] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x172d940/0x1731e30) succeed. 00:18:13.678 [2024-12-10 04:06:07.612804] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x172ee90/0x17734d0) succeed. 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:13.678 Malloc0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:13.678 [2024-12-10 04:06:07.768830] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:13.678 { 00:18:13.678 "params": { 00:18:13.678 "name": "Nvme$subsystem", 00:18:13.678 "trtype": "$TEST_TRANSPORT", 00:18:13.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.678 "adrfam": "ipv4", 00:18:13.678 "trsvcid": "$NVMF_PORT", 00:18:13.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.678 "hdgst": ${hdgst:-false}, 00:18:13.678 "ddgst": ${ddgst:-false} 00:18:13.678 }, 00:18:13.678 "method": "bdev_nvme_attach_controller" 00:18:13.678 } 00:18:13.678 EOF 00:18:13.678 )") 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:18:13.678 04:06:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:13.678 "params": { 00:18:13.678 "name": "Nvme0", 00:18:13.678 "trtype": "rdma", 00:18:13.678 "traddr": "192.168.100.8", 00:18:13.678 "adrfam": "ipv4", 00:18:13.678 "trsvcid": "4420", 00:18:13.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:13.678 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:13.678 "hdgst": false, 00:18:13.678 "ddgst": false 00:18:13.678 }, 00:18:13.678 "method": "bdev_nvme_attach_controller" 00:18:13.678 }' 00:18:13.678 [2024-12-10 04:06:07.815219] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:13.678 [2024-12-10 04:06:07.815260] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807374 ] 00:18:13.679 [2024-12-10 04:06:07.869283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:13.679 [2024-12-10 04:06:07.908134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.679 [2024-12-10 04:06:07.908137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.934 bdev Nvme0n1 reports 1 memory domains 00:18:18.934 bdev Nvme0n1 supports RDMA memory domain 00:18:18.934 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:18.934 ========================================================================== 00:18:18.934 Latency [us] 00:18:18.934 IOPS MiB/s Average min max 00:18:18.934 Core 2: 22253.33 86.93 718.36 239.89 9567.84 00:18:18.934 Core 3: 22149.95 86.52 721.71 232.45 9661.00 00:18:18.934 ========================================================================== 00:18:18.934 Total : 44403.28 173.45 720.03 232.45 9661.00 00:18:18.934 00:18:18.934 Total operations: 222039, translate 222039 pull_push 0 memzero 0 00:18:18.934 04:06:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:18:18.934 04:06:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:18:18.934 04:06:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:18:19.192 [2024-12-10 04:06:13.325113] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:19.192 [2024-12-10 04:06:13.325163] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808239 ] 00:18:19.192 [2024-12-10 04:06:13.380606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:19.192 [2024-12-10 04:06:13.416356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.192 [2024-12-10 04:06:13.416358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.467 bdev Malloc0 reports 2 memory domains 00:18:24.467 bdev Malloc0 doesn't support RDMA memory domain 00:18:24.467 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:24.467 ========================================================================== 00:18:24.467 Latency [us] 00:18:24.467 IOPS MiB/s Average min max 00:18:24.467 Core 2: 14700.18 57.42 1087.76 385.00 1376.35 00:18:24.467 Core 3: 14955.70 58.42 1069.16 414.04 1960.80 00:18:24.467 ========================================================================== 00:18:24.467 Total : 29655.88 115.84 1078.38 385.00 1960.80 00:18:24.467 00:18:24.467 Total operations: 148326, translate 0 pull_push 593304 memzero 0 00:18:24.467 04:06:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:18:24.467 04:06:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:18:24.467 04:06:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:18:24.467 04:06:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:18:24.467 Ignoring -M option 00:18:24.467 [2024-12-10 04:06:18.730115] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:24.467 [2024-12-10 04:06:18.730163] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809286 ] 00:18:24.467 [2024-12-10 04:06:18.783885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:24.467 [2024-12-10 04:06:18.819146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:24.467 [2024-12-10 04:06:18.819149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.012 bdev 38cc2a42-bb07-40ff-8025-37ca4e988d5d reports 1 memory domains 00:18:31.012 bdev 38cc2a42-bb07-40ff-8025-37ca4e988d5d supports RDMA memory domain 00:18:31.012 Initialization complete, running randread IO for 5 sec on 2 cores 00:18:31.012 ========================================================================== 00:18:31.012 Latency [us] 00:18:31.012 IOPS MiB/s Average min max 00:18:31.012 Core 2: 76437.49 298.58 208.59 81.00 3026.89 00:18:31.012 Core 3: 77849.36 304.10 204.79 74.92 3128.56 00:18:31.012 ========================================================================== 00:18:31.012 Total : 154286.85 602.68 206.67 74.92 3128.56 00:18:31.012 00:18:31.012 Total operations: 771508, translate 0 pull_push 0 memzero 771508 00:18:31.012 04:06:24 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:18:31.012 [2024-12-10 04:06:24.335697] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:32.377 Initializing NVMe Controllers 00:18:32.377 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:18:32.377 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:18:32.377 Initialization complete. Launching workers. 00:18:32.377 ======================================================== 00:18:32.377 Latency(us) 00:18:32.377 Device Information : IOPS MiB/s Average min max 00:18:32.377 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7980.53 4987.68 10970.75 00:18:32.377 ======================================================== 00:18:32.377 Total : 2016.00 7.88 7980.53 4987.68 10970.75 00:18:32.377 00:18:32.377 04:06:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:18:32.377 04:06:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:18:32.377 04:06:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:18:32.377 04:06:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:18:32.377 [2024-12-10 04:06:26.675188] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
00:18:32.377 [2024-12-10 04:06:26.675228] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810647 ] 00:18:32.377 [2024-12-10 04:06:26.728562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:32.634 [2024-12-10 04:06:26.767828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.634 [2024-12-10 04:06:26.767832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.885 bdev 4296250c-7582-4505-866c-4aef23cb4bed reports 1 memory domains 00:18:37.885 bdev 4296250c-7582-4505-866c-4aef23cb4bed supports RDMA memory domain 00:18:37.885 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:37.885 ========================================================================== 00:18:37.885 Latency [us] 00:18:37.885 IOPS MiB/s Average min max 00:18:37.885 Core 2: 19561.21 76.41 817.27 14.62 11661.87 00:18:37.885 Core 3: 19889.56 77.69 803.79 17.97 11681.54 00:18:37.885 ========================================================================== 00:18:37.885 Total : 39450.77 154.10 810.47 14.62 11681.54 00:18:37.885 00:18:37.885 Total operations: 197282, translate 197176 pull_push 0 memzero 106 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:37.885 rmmod nvme_rdma 00:18:37.885 rmmod nvme_fabrics 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 807152 ']' 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 807152 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 807152 ']' 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 807152 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.885 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 807152 00:18:38.143 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.143 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.143 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 807152' 00:18:38.143 killing process with 
pid 807152 00:18:38.143 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 807152 00:18:38.143 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 807152 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:38.400 00:18:38.400 real 0m31.364s 00:18:38.400 user 1m34.389s 00:18:38.400 sys 0m5.535s 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:38.400 ************************************ 00:18:38.400 END TEST dma 00:18:38.400 ************************************ 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.400 ************************************ 00:18:38.400 START TEST nvmf_identify 00:18:38.400 ************************************ 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:18:38.400 * Looking for test storage... 00:18:38.400 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 
00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:38.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.400 --rc genhtml_branch_coverage=1 00:18:38.400 --rc genhtml_function_coverage=1 00:18:38.400 --rc genhtml_legend=1 00:18:38.400 --rc geninfo_all_blocks=1 00:18:38.400 --rc geninfo_unexecuted_blocks=1 00:18:38.400 00:18:38.400 ' 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:38.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.400 --rc genhtml_branch_coverage=1 00:18:38.400 --rc genhtml_function_coverage=1 00:18:38.400 --rc genhtml_legend=1 00:18:38.400 --rc geninfo_all_blocks=1 00:18:38.400 --rc geninfo_unexecuted_blocks=1 00:18:38.400 00:18:38.400 ' 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:38.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.400 --rc genhtml_branch_coverage=1 00:18:38.400 --rc genhtml_function_coverage=1 00:18:38.400 --rc genhtml_legend=1 00:18:38.400 --rc geninfo_all_blocks=1 00:18:38.400 --rc geninfo_unexecuted_blocks=1 00:18:38.400 00:18:38.400 ' 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:38.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.400 --rc genhtml_branch_coverage=1 00:18:38.400 --rc genhtml_function_coverage=1 00:18:38.400 --rc genhtml_legend=1 00:18:38.400 --rc geninfo_all_blocks=1 00:18:38.400 --rc geninfo_unexecuted_blocks=1 00:18:38.400 00:18:38.400 ' 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.400 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:38.657 04:06:32 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.657 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.657 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.657 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.657 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.657 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.657 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.657 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.657 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:38.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:38.658 04:06:32 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:18:38.658 04:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.216 04:06:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:45.216 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:45.216 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:45.216 Found net devices under 0000:18:00.0: mlx_0_0 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:45.216 Found net devices under 0000:18:00.1: mlx_0_1 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:45.216 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:45.217 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:45.217 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:45.217 altname enp24s0f0np0 00:18:45.217 altname ens785f0np0 00:18:45.217 inet 192.168.100.8/24 scope global mlx_0_0 00:18:45.217 valid_lft forever preferred_lft forever 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:45.217 04:06:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:45.217 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:45.217 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:45.217 altname enp24s0f1np1 00:18:45.217 altname ens785f1np1 00:18:45.217 inet 192.168.100.9/24 scope global mlx_0_1 00:18:45.217 valid_lft forever preferred_lft forever 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:45.217 04:06:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:45.217 192.168.100.9' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:45.217 192.168.100.9' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:45.217 192.168.100.9' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=815008 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 815008 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 815008 ']' 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.217 [2024-12-10 04:06:38.611510] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:45.217 [2024-12-10 04:06:38.611554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.217 [2024-12-10 04:06:38.669242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.217 [2024-12-10 04:06:38.709584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.217 [2024-12-10 04:06:38.709618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.217 [2024-12-10 04:06:38.709625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.217 [2024-12-10 04:06:38.709631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.217 [2024-12-10 04:06:38.709635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.217 [2024-12-10 04:06:38.710861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.217 [2024-12-10 04:06:38.710958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.217 [2024-12-10 04:06:38.711034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.217 [2024-12-10 04:06:38.711035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.217 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.218 [2024-12-10 04:06:38.838318] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13660c0/0x136a5b0) succeed. 00:18:45.218 [2024-12-10 04:06:38.846554] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1367750/0x13abc50) succeed. 
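The trace above starts the target and registers the RDMA transport: nvmf_tgt runs with -i 0 (shared-memory id), -e 0xFFFF (all tracepoint groups, matching the "Tracepoint Group Mask 0xFFFF" notice) and -m 0xF (four reactors), waitforlisten blocks until the RPC socket answers, and rpc_cmd issues nvmf_create_transport. In this harness rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock, so a minimal standalone sketch of the same bring-up (same workspace paths; the sleep stands in for waitforlisten) is:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # start the target: shm id 0, all tracepoint groups, cores 0-3
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  sleep 2   # the harness polls the RPC socket (waitforlisten) instead of sleeping
  # RDMA transport with 1024 shared buffers and an 8 KiB IO unit size
  $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices confirm the transport bound both mlx5 ports; the RPCs that follow stack the Malloc0 bdev and the cnode1 subsystem on top of it.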
00:18:45.218 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.218 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:45.218 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.218 04:06:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.218 Malloc0 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.218 [2024-12-10 04:06:39.050907] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.218 [ 00:18:45.218 { 00:18:45.218 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:45.218 "subtype": "Discovery", 00:18:45.218 "listen_addresses": [ 00:18:45.218 { 00:18:45.218 "trtype": "RDMA", 
00:18:45.218 "adrfam": "IPv4", 00:18:45.218 "traddr": "192.168.100.8", 00:18:45.218 "trsvcid": "4420" 00:18:45.218 } 00:18:45.218 ], 00:18:45.218 "allow_any_host": true, 00:18:45.218 "hosts": [] 00:18:45.218 }, 00:18:45.218 { 00:18:45.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.218 "subtype": "NVMe", 00:18:45.218 "listen_addresses": [ 00:18:45.218 { 00:18:45.218 "trtype": "RDMA", 00:18:45.218 "adrfam": "IPv4", 00:18:45.218 "traddr": "192.168.100.8", 00:18:45.218 "trsvcid": "4420" 00:18:45.218 } 00:18:45.218 ], 00:18:45.218 "allow_any_host": true, 00:18:45.218 "hosts": [], 00:18:45.218 "serial_number": "SPDK00000000000001", 00:18:45.218 "model_number": "SPDK bdev Controller", 00:18:45.218 "max_namespaces": 32, 00:18:45.218 "min_cntlid": 1, 00:18:45.218 "max_cntlid": 65519, 00:18:45.218 "namespaces": [ 00:18:45.218 { 00:18:45.218 "nsid": 1, 00:18:45.218 "bdev_name": "Malloc0", 00:18:45.218 "name": "Malloc0", 00:18:45.218 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:45.218 "eui64": "ABCDEF0123456789", 00:18:45.218 "uuid": "794ebc74-fa14-40f9-b3f1-69f7d597b204" 00:18:45.218 } 00:18:45.218 ] 00:18:45.218 } 00:18:45.218 ] 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.218 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:45.218 [2024-12-10 04:06:39.101730] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:45.218 [2024-12-10 04:06:39.101767] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815170 ] 00:18:45.218 [2024-12-10 04:06:39.156426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:18:45.218 [2024-12-10 04:06:39.156502] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:18:45.218 [2024-12-10 04:06:39.156513] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:18:45.218 [2024-12-10 04:06:39.156516] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:18:45.218 [2024-12-10 04:06:39.156544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:18:45.218 [2024-12-10 04:06:39.164990] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:18:45.218 [2024-12-10 04:06:39.176665] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:45.218 [2024-12-10 04:06:39.176675] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:18:45.218 [2024-12-10 04:06:39.176685] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176690] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176694] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176698] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176702] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176706] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176710] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176714] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176718] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176722] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176726] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176730] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176734] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176739] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176743] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176747] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176751] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176755] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176759] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176763] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176767] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176771] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176775] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 
04:06:39.176779] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176783] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176787] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176791] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176795] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176799] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176803] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176807] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176811] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:18:45.218 [2024-12-10 04:06:39.176817] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:45.218 [2024-12-10 04:06:39.176820] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:18:45.218 [2024-12-10 04:06:39.176839] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.176850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x184d00 00:18:45.218 [2024-12-10 04:06:39.182272] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.218 [2024-12-10 04:06:39.182282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:45.218 [2024-12-10 04:06:39.182288] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184d00 00:18:45.218 [2024-12-10 04:06:39.182293] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:45.218 [2024-12-10 04:06:39.182300] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:18:45.219 [2024-12-10 04:06:39.182304] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:18:45.219 [2024-12-10 04:06:39.182316] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.219 [2024-12-10 04:06:39.182343] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.219 [2024-12-10 04:06:39.182348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182353] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:18:45.219 [2024-12-10 04:06:39.182356] 
nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182361] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:18:45.219 [2024-12-10 04:06:39.182366] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.219 [2024-12-10 04:06:39.182389] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.219 [2024-12-10 04:06:39.182393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182397] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:18:45.219 [2024-12-10 04:06:39.182401] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:45.219 [2024-12-10 04:06:39.182412] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.219 [2024-12-10 04:06:39.182438] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.219 [2024-12-10 04:06:39.182442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:45.219 [2024-12-10 04:06:39.182453] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182460] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.219 [2024-12-10 04:06:39.182486] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.219 [2024-12-10 04:06:39.182490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182495] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:45.219 [2024-12-10 04:06:39.182499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:45.219 [2024-12-10 04:06:39.182502] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 
04:06:39.182507] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:45.219 [2024-12-10 04:06:39.182614] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:18:45.219 [2024-12-10 04:06:39.182618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:45.219 [2024-12-10 04:06:39.182626] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.219 [2024-12-10 04:06:39.182649] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.219 [2024-12-10 04:06:39.182653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:45.219 [2024-12-10 04:06:39.182661] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182667] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.219 [2024-12-10 04:06:39.182692] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.219 [2024-12-10 04:06:39.182696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182701] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:45.219 [2024-12-10 04:06:39.182704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:45.219 [2024-12-10 04:06:39.182708] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182713] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:18:45.219 [2024-12-10 04:06:39.182719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:45.219 [2024-12-10 04:06:39.182728] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184d00 00:18:45.219 [2024-12-10 04:06:39.182768] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
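
The records above walk the Fabrics flavor of the standard NVMe enable handshake: read CC, disable and wait for CSTS.RDY = 0, write CC.EN = 1 via a Property Set, then poll CSTS.RDY until it reads 1, at which point init proceeds to IDENTIFY. Below is a minimal C sketch of that sequence; fabric_property_get()/fabric_property_set() are hypothetical stand-ins for the FABRIC PROPERTY GET/SET commands printed by nvme_qpair.c above (they are not SPDK API names), and the register offsets and bit positions are the NVMe-spec ones.

#include <stdint.h>

#define NVME_REG_CC   0x14u  /* Controller Configuration register offset */
#define NVME_REG_CSTS 0x1cu  /* Controller Status register offset */
#define NVME_CC_EN    (1u << 0)
#define NVME_CSTS_RDY (1u << 0)

/* Hypothetical transport helpers: assumed to issue Fabrics Property
 * Get/Set over the admin queue pair, as the log's FABRIC PROPERTY
 * GET/SET records do. */
uint32_t fabric_property_get(uint32_t offset);
void fabric_property_set(uint32_t offset, uint32_t value);

void enable_controller(void)
{
    uint32_t cc = fabric_property_get(NVME_REG_CC);  /* "check en" */

    if (cc & NVME_CC_EN) {
        /* "disable and wait for CSTS.RDY = 0" in the log */
        fabric_property_set(NVME_REG_CC, cc & ~NVME_CC_EN);
        while (fabric_property_get(NVME_REG_CSTS) & NVME_CSTS_RDY) {
        }
    }

    /* "Setting CC.EN = 1", then "wait for CSTS.RDY = 1" */
    fabric_property_set(NVME_REG_CC, cc | NVME_CC_EN);
    while (!(fabric_property_get(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
    }
    /* controller is ready; the driver moves on to IDENTIFY */
}

The real state machine also programs the queue entry sizes in CC before enabling and bounds each poll with the 15000 ms timeouts visible in the _nvme_ctrlr_set_state lines; the sketch omits both for brevity.
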
00:18:45.219 [2024-12-10 04:06:39.182772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182778] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:18:45.219 [2024-12-10 04:06:39.182782] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:18:45.219 [2024-12-10 04:06:39.182786] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:18:45.219 [2024-12-10 04:06:39.182790] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:18:45.219 [2024-12-10 04:06:39.182796] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:18:45.219 [2024-12-10 04:06:39.182800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:18:45.219 [2024-12-10 04:06:39.182803] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:45.219 [2024-12-10 04:06:39.182814] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.219 [2024-12-10 04:06:39.182842] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.219 [2024-12-10 04:06:39.182846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182855] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.219 [2024-12-10 04:06:39.182865] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.219 [2024-12-10 04:06:39.182874] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.219 [2024-12-10 04:06:39.182883] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.219 [2024-12-10 04:06:39.182892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:45.219 [2024-12-10 04:06:39.182896] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:45.219 [2024-12-10 04:06:39.182908] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.219 [2024-12-10 04:06:39.182929] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.219 [2024-12-10 04:06:39.182933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182940] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:18:45.219 [2024-12-10 04:06:39.182944] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:18:45.219 [2024-12-10 04:06:39.182948] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182955] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.182960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184d00 00:18:45.219 [2024-12-10 04:06:39.182983] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.219 [2024-12-10 04:06:39.182987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:45.219 [2024-12-10 04:06:39.182992] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x184d00 00:18:45.219 [2024-12-10 04:06:39.183000] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:18:45.220 [2024-12-10 04:06:39.183022] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.220 [2024-12-10 04:06:39.183028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x400 key:0x184d00 00:18:45.220 [2024-12-10 04:06:39.183034] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x184d00 00:18:45.220 [2024-12-10 04:06:39.183038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.220 [2024-12-10 04:06:39.183056] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.220 [2024-12-10 04:06:39.183061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
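
Two values in the records above decode directly. The GET FEATURES KEEP ALIVE TIMER completion carries cdw0:2710, a 10000 ms keep-alive timeout, and the driver then reports sending keep alives every 5000000 us, i.e. at half that interval. The GET LOG PAGE command word cdw10:00ff0070 packs the log page ID in bits 7:0 (0x70, the discovery log) and NUMDL, the dword count minus one, in bits 31:16, which works out to the len:0x400 transfer printed on the same command. A small self-contained C check of both decodes, using the constants straight from the log and the NVMe-spec field layout:

#include <stdio.h>

int main(void)
{
    /* cdw0:2710 from GET FEATURES KEEP ALIVE TIMER: 0x2710 = 10000 ms;
     * half of it, in microseconds, matches "keep alive every 5000000 us". */
    unsigned kato_ms = 0x2710;
    printf("keep alive every %u us\n", kato_ms / 2 * 1000);

    /* cdw10:00ff0070 from GET LOG PAGE: LID in bits 7:0, NUMDL
     * (dwords - 1) in bits 31:16; (0xff + 1) * 4 = 0x400 bytes,
     * matching the SGL "len:0x400" on the command. */
    unsigned cdw10 = 0x00ff0070;
    printf("log page 0x%02x, %u bytes\n",
           cdw10 & 0xffu, (((cdw10 >> 16) & 0xffffu) + 1) * 4);
    return 0;
}
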
00:18:45.220 [2024-12-10 04:06:39.183069] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x184d00 00:18:45.220 [2024-12-10 04:06:39.183075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0xc00 key:0x184d00 00:18:45.220 [2024-12-10 04:06:39.183079] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x184d00 00:18:45.220 [2024-12-10 04:06:39.183083] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.220 [2024-12-10 04:06:39.183087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:45.220 [2024-12-10 04:06:39.183091] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x184d00 00:18:45.220 [2024-12-10 04:06:39.183104] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.220 [2024-12-10 04:06:39.183108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:45.220 [2024-12-10 04:06:39.183117] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x184d00 00:18:45.220 [2024-12-10 04:06:39.183123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x8 key:0x184d00 00:18:45.220 [2024-12-10 04:06:39.183127] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x184d00 00:18:45.220 [2024-12-10 04:06:39.183148] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.220 [2024-12-10 04:06:39.183153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:45.220 [2024-12-10 04:06:39.183161] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x184d00
00:18:45.220 =====================================================
00:18:45.220 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:18:45.220 =====================================================
00:18:45.220 Controller Capabilities/Features
00:18:45.220 ================================
00:18:45.220 Vendor ID: 0000
00:18:45.220 Subsystem Vendor ID: 0000
00:18:45.220 Serial Number: ....................
00:18:45.220 Model Number: ........................................
00:18:45.220 Firmware Version: 25.01
00:18:45.220 Recommended Arb Burst: 0
00:18:45.220 IEEE OUI Identifier: 00 00 00
00:18:45.220 Multi-path I/O
00:18:45.220 May have multiple subsystem ports: No
00:18:45.220 May have multiple controllers: No
00:18:45.220 Associated with SR-IOV VF: No
00:18:45.220 Max Data Transfer Size: 131072
00:18:45.220 Max Number of Namespaces: 0
00:18:45.220 Max Number of I/O Queues: 1024
00:18:45.220 NVMe Specification Version (VS): 1.3
00:18:45.220 NVMe Specification Version (Identify): 1.3
00:18:45.220 Maximum Queue Entries: 128
00:18:45.220 Contiguous Queues Required: Yes
00:18:45.220 Arbitration Mechanisms Supported
00:18:45.220 Weighted Round Robin: Not Supported
00:18:45.220 Vendor Specific: Not Supported
00:18:45.220 Reset Timeout: 15000 ms
00:18:45.220 Doorbell Stride: 4 bytes
00:18:45.220 NVM Subsystem Reset: Not Supported
00:18:45.220 Command Sets Supported
00:18:45.220 NVM Command Set: Supported
00:18:45.220 Boot Partition: Not Supported
00:18:45.220 Memory Page Size Minimum: 4096 bytes
00:18:45.220 Memory Page Size Maximum: 4096 bytes
00:18:45.220 Persistent Memory Region: Not Supported
00:18:45.220 Optional Asynchronous Events Supported
00:18:45.220 Namespace Attribute Notices: Not Supported
00:18:45.220 Firmware Activation Notices: Not Supported
00:18:45.220 ANA Change Notices: Not Supported
00:18:45.220 PLE Aggregate Log Change Notices: Not Supported
00:18:45.220 LBA Status Info Alert Notices: Not Supported
00:18:45.220 EGE Aggregate Log Change Notices: Not Supported
00:18:45.220 Normal NVM Subsystem Shutdown event: Not Supported
00:18:45.220 Zone Descriptor Change Notices: Not Supported
00:18:45.220 Discovery Log Change Notices: Supported
00:18:45.220 Controller Attributes
00:18:45.220 128-bit Host Identifier: Not Supported
00:18:45.220 Non-Operational Permissive Mode: Not Supported
00:18:45.220 NVM Sets: Not Supported
00:18:45.220 Read Recovery Levels: Not Supported
00:18:45.220 Endurance Groups: Not Supported
00:18:45.220 Predictable Latency Mode: Not Supported
00:18:45.220 Traffic Based Keep Alive: Not Supported
00:18:45.220 Namespace Granularity: Not Supported
00:18:45.220 SQ Associations: Not Supported
00:18:45.220 UUID List: Not Supported
00:18:45.220 Multi-Domain Subsystem: Not Supported
00:18:45.220 Fixed Capacity Management: Not Supported
00:18:45.220 Variable Capacity Management: Not Supported
00:18:45.220 Delete Endurance Group: Not Supported
00:18:45.220 Delete NVM Set: Not Supported
00:18:45.220 Extended LBA Formats Supported: Not Supported
00:18:45.220 Flexible Data Placement Supported: Not Supported
00:18:45.220
00:18:45.220 Controller Memory Buffer Support
00:18:45.220 ================================
00:18:45.220 Supported: No
00:18:45.220
00:18:45.220 Persistent Memory Region Support
00:18:45.220 ================================
00:18:45.220 Supported: No
00:18:45.220
00:18:45.220 Admin Command Set Attributes
00:18:45.220 ============================
00:18:45.220 Security Send/Receive: Not Supported
00:18:45.220 Format NVM: Not Supported
00:18:45.220 Firmware Activate/Download: Not Supported
00:18:45.220 Namespace Management: Not Supported
00:18:45.220 Device Self-Test: Not Supported
00:18:45.220 Directives: Not Supported
00:18:45.220 NVMe-MI: Not Supported
00:18:45.220 Virtualization Management: Not Supported
00:18:45.220 Doorbell Buffer Config: Not Supported
00:18:45.220 Get LBA Status Capability: Not Supported
00:18:45.220 Command & Feature Lockdown Capability: Not Supported
00:18:45.220 Abort Command Limit: 1
00:18:45.220 Async Event Request Limit: 4
00:18:45.220 Number of Firmware Slots: N/A
00:18:45.220 Firmware Slot 1 Read-Only: N/A
00:18:45.220 Firmware Activation Without Reset: N/A
00:18:45.220 Multiple Update Detection Support: N/A
00:18:45.220 Firmware Update Granularity: No Information Provided
00:18:45.220 Per-Namespace SMART Log: No
00:18:45.220 Asymmetric Namespace Access Log Page: Not Supported
00:18:45.220 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:18:45.220 Command Effects Log Page: Not Supported
00:18:45.220 Get Log Page Extended Data: Supported
00:18:45.220 Telemetry Log Pages: Not Supported
00:18:45.220 Persistent Event Log Pages: Not Supported
00:18:45.220 Supported Log Pages Log Page: May Support
00:18:45.220 Commands Supported & Effects Log Page: Not Supported
00:18:45.220 Feature Identifiers & Effects Log Page: May Support
00:18:45.220 NVMe-MI Commands & Effects Log Page: May Support
00:18:45.220 Data Area 4 for Telemetry Log: Not Supported
00:18:45.220 Error Log Page Entries Supported: 128
00:18:45.220 Keep Alive: Not Supported
00:18:45.220
00:18:45.220 NVM Command Set Attributes
00:18:45.220 ==========================
00:18:45.220 Submission Queue Entry Size
00:18:45.220 Max: 1
00:18:45.220 Min: 1
00:18:45.220 Completion Queue Entry Size
00:18:45.220 Max: 1
00:18:45.220 Min: 1
00:18:45.220 Number of Namespaces: 0
00:18:45.220 Compare Command: Not Supported
00:18:45.220 Write Uncorrectable Command: Not Supported
00:18:45.220 Dataset Management Command: Not Supported
00:18:45.220 Write Zeroes Command: Not Supported
00:18:45.220 Set Features Save Field: Not Supported
00:18:45.220 Reservations: Not Supported
00:18:45.220 Timestamp: Not Supported
00:18:45.220 Copy: Not Supported
00:18:45.220 Volatile Write Cache: Not Present
00:18:45.221 Atomic Write Unit (Normal): 1
00:18:45.221 Atomic Write Unit (PFail): 1
00:18:45.221 Atomic Compare & Write Unit: 1
00:18:45.221 Fused Compare & Write: Supported
00:18:45.221 Scatter-Gather List
00:18:45.221 SGL Command Set: Supported
00:18:45.221 SGL Keyed: Supported
00:18:45.221 SGL Bit Bucket Descriptor: Not Supported
00:18:45.221 SGL Metadata Pointer: Not Supported
00:18:45.221 Oversized SGL: Not Supported
00:18:45.221 SGL Metadata Address: Not Supported
00:18:45.221 SGL Offset: Supported
00:18:45.221 Transport SGL Data Block: Not Supported
00:18:45.221 Replay Protected Memory Block: Not Supported
00:18:45.221
00:18:45.221 Firmware Slot Information
00:18:45.221 =========================
00:18:45.221 Active slot: 0
00:18:45.221
00:18:45.221
00:18:45.221 Error Log
00:18:45.221 =========
00:18:45.221
00:18:45.221 Active Namespaces
00:18:45.221 =================
00:18:45.221 Discovery Log Page
00:18:45.221 ==================
00:18:45.221 Generation Counter: 2
00:18:45.221 Number of Records: 2
00:18:45.221 Record Format: 0
00:18:45.221
00:18:45.221 Discovery Log Entry 0
00:18:45.221 ----------------------
00:18:45.221 Transport Type: 1 (RDMA)
00:18:45.221 Address Family: 1 (IPv4)
00:18:45.221 Subsystem Type: 3 (Current Discovery Subsystem)
00:18:45.221 Entry Flags:
00:18:45.221 Duplicate Returned Information: 1
00:18:45.221 Explicit Persistent Connection Support for Discovery: 1
00:18:45.221 Transport Requirements:
00:18:45.221 Secure Channel: Not Required
00:18:45.221 Port ID: 0 (0x0000)
00:18:45.221 Controller ID: 65535 (0xffff)
00:18:45.221 Admin Max SQ Size: 128
00:18:45.221 Transport Service Identifier: 4420
00:18:45.221 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:18:45.221 Transport Address: 192.168.100.8
00:18:45.221 Transport Specific Address Subtype - RDMA
00:18:45.221 RDMA QP Service Type: 1 (Reliable Connected)
00:18:45.221 RDMA Provider Type: 1 (No provider specified)
00:18:45.221 RDMA CM Service: 1 (RDMA_CM)
00:18:45.221 Discovery Log Entry 1
00:18:45.221 ----------------------
00:18:45.221 Transport Type: 1 (RDMA)
00:18:45.221 Address Family: 1 (IPv4)
00:18:45.221 Subsystem Type: 2 (NVM Subsystem)
00:18:45.221 Entry Flags:
00:18:45.221 Duplicate Returned Information: 0
00:18:45.221 Explicit Persistent Connection Support for Discovery: 0
00:18:45.221 Transport Requirements:
00:18:45.221 Secure Channel: Not Required
00:18:45.221 Port ID: 0 (0x0000)
00:18:45.221 Controller ID: 65535 (0xffff)
00:18:45.221 Admin Max SQ Size: [2024-12-10 04:06:39.183221] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:18:45.221 [2024-12-10 04:06:39.183229] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 15987 doesn't match qid 00:18:45.221 [2024-12-10 04:06:39.183239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32571 cdw0:cf9557d0 sqhd:6880 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183244] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 15987 doesn't match qid 00:18:45.221 [2024-12-10 04:06:39.183249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32571 cdw0:cf9557d0 sqhd:6880 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183254] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 15987 doesn't match qid 00:18:45.221 [2024-12-10 04:06:39.183259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32571 cdw0:cf9557d0 sqhd:6880 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183263] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 15987 doesn't match qid 00:18:45.221 [2024-12-10 04:06:39.183273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32571 cdw0:cf9557d0 sqhd:6880 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183281] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183307] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183319] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183329] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183348] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183357]
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:18:45.221 [2024-12-10 04:06:39.183360] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:18:45.221 [2024-12-10 04:06:39.183364] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183370] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183397] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183405] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183412] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183435] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183443] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183450] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183470] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183478] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183484] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183507] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183516] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183522] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183542] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183551] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183558] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183586] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183594] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183601] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183625] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183633] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183639] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183659] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.221 [2024-12-10 04:06:39.183663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:45.221 [2024-12-10 04:06:39.183667] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183673] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.221 [2024-12-10 04:06:39.183679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.221 [2024-12-10 04:06:39.183696] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.183701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.183705] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183712] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.183733] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.183737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.183741] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183748] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.183767] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.183771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.183775] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183782] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.183801] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.183805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.183810] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183816] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.183841] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.183845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.183850] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183856] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.183876] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.183880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.183884] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183890] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.183913] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.183917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.183921] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183928] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.183952] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.183956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.183960] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183966] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.183971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.183989] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.183993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.183997] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184003] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184026] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184034] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184040] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184063] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184071] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184077] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184097] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184105] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184112] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184134] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184142] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184149] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184170] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184178] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184184] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184208] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184217] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184223] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184247] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184255] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184263] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184289] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184297] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184303] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184326] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184334] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184340] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.222 [2024-12-10 04:06:39.184346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.222 [2024-12-10 04:06:39.184360] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.222 [2024-12-10 04:06:39.184364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:45.222 [2024-12-10 04:06:39.184368] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184375] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184402] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184410] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184416] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184437] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184445] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184452] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184476] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184485] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184492] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184517] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184525] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184532] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184554] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184562] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184569] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184597] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184606] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184612] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184632] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184639] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184646] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184665] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184674] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184680] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184706] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184715] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184722] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184746] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184754] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184760] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184787] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184795] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184802] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184823] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184831] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184837] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184863] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184871] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184877] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184900] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184908] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184914] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184934] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184944] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184950] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.184971] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.184975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.184979] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184986] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.184991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.185005] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.185009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.185013] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.185020] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.223 [2024-12-10 04:06:39.185025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.223 [2024-12-10 04:06:39.185042] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.223 [2024-12-10 04:06:39.185046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:45.223 [2024-12-10 04:06:39.185050] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185057] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185082] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185090] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185097] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185121] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185129] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185135] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185157] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185167] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185173] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185197] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185205] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185212] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185237] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185245] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185252] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185274] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185282] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185288] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185311] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185319] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185325] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185352] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185361] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185367] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185391] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185399] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185405] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185425] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185433] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185439] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185460] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185468] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185475] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185496] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185504] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185510] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185536] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185543] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185550] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185577] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185585] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185592] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185615] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185624] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185630] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185655] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185663] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185670] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.224 [2024-12-10 04:06:39.185692] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.224 [2024-12-10 04:06:39.185696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:18:45.224 [2024-12-10 04:06:39.185700] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x184d00 00:18:45.224 [2024-12-10 04:06:39.185707] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.185726] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.185730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.185734] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185741] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.185760] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.185764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.185768] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185775] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.185794] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.185798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.185802] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185809] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.185836] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.185840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.185844] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185850] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.185876] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.185880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.185884] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185890] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.185913] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.185917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.185921] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185927] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.185950] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.185954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.185958] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185964] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.185969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.185986] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.185990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.185994] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186001] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.186026] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.186030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.186034] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186041] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.186062] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.186066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.186070] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186076] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.186100] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.186104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.186108] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186115] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.186134] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.186138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.186142] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186149] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.186173] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.186177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.186181] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186188] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.186210] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.186214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.186218] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186224] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.186247] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.225 [2024-12-10 04:06:39.186251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:45.225 [2024-12-10 04:06:39.186255] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.186263] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.225 [2024-12-10 04:06:39.190273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.225 [2024-12-10 04:06:39.190294] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.226 [2024-12-10 04:06:39.190298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001f p:0 m:0 dnr:0 00:18:45.226 [2024-12-10 04:06:39.190302] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.190307] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:18:45.226 128 00:18:45.226 Transport Service Identifier: 4420 00:18:45.226 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:45.226 Transport Address: 192.168.100.8 00:18:45.226 Transport Specific Address Subtype - RDMA 00:18:45.226 RDMA QP Service Type: 1 (Reliable Connected) 00:18:45.226 RDMA Provider Type: 1 (No provider specified) 00:18:45.226 RDMA CM 
Service: 1 (RDMA_CM)
00:18:45.226 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:18:45.226 [2024-12-10 04:06:39.256094] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:18:45.226 [2024-12-10 04:06:39.256138] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815178 ]
00:18:45.226 [2024-12-10 04:06:39.309026] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:18:45.226 [2024-12-10 04:06:39.309088] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:18:45.226 [2024-12-10 04:06:39.309097] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:18:45.226 [2024-12-10 04:06:39.309101] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:18:45.226 [2024-12-10 04:06:39.309122] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:18:45.226 [2024-12-10 04:06:39.319816] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:18:45.226 [2024-12-10 04:06:39.329485] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0
00:18:45.226 [2024-12-10 04:06:39.329493] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:18:45.226 [2024-12-10 04:06:39.329499] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329504] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329508] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329512] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329516] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329520] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329524] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329530] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329534] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329538] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329542] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x184d00
00:18:45.226 [2024-12-10 04:06:39.329546] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x184d00
00:18:45.226
[2024-12-10 04:06:39.329550] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329554] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329558] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329562] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329566] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329570] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329574] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329578] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329582] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329586] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329590] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329594] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329598] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329602] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329606] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329610] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329614] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329618] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329622] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329625] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:18:45.226 [2024-12-10 04:06:39.329629] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:45.226 [2024-12-10 04:06:39.329632] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:18:45.226 [2024-12-10 04:06:39.329648] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.329658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x184d00 00:18:45.226 [2024-12-10 04:06:39.335273] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.226 [2024-12-10 
04:06:39.335280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:45.226 [2024-12-10 04:06:39.335285] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.335293] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:45.226 [2024-12-10 04:06:39.335298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:18:45.226 [2024-12-10 04:06:39.335302] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:18:45.226 [2024-12-10 04:06:39.335312] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.335318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.226 [2024-12-10 04:06:39.335337] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.226 [2024-12-10 04:06:39.335341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:18:45.226 [2024-12-10 04:06:39.335345] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:18:45.226 [2024-12-10 04:06:39.335349] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.335354] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:18:45.226 [2024-12-10 04:06:39.335360] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.335365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.226 [2024-12-10 04:06:39.335384] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.226 [2024-12-10 04:06:39.335388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:18:45.226 [2024-12-10 04:06:39.335392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:18:45.226 [2024-12-10 04:06:39.335396] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.335401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:45.226 [2024-12-10 04:06:39.335406] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.335412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.226 [2024-12-10 04:06:39.335429] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.226 [2024-12-10 04:06:39.335433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:18:45.226 [2024-12-10 04:06:39.335438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:45.226 [2024-12-10 04:06:39.335441] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.335447] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.226 [2024-12-10 04:06:39.335453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.226 [2024-12-10 04:06:39.335469] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.226 [2024-12-10 04:06:39.335473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:45.226 [2024-12-10 04:06:39.335477] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:45.226 [2024-12-10 04:06:39.335482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:45.227 [2024-12-10 04:06:39.335486] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:45.227 [2024-12-10 04:06:39.335597] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:18:45.227 [2024-12-10 04:06:39.335601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:45.227 [2024-12-10 04:06:39.335607] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.227 [2024-12-10 04:06:39.335630] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.227 [2024-12-10 04:06:39.335634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:45.227 [2024-12-10 04:06:39.335638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:45.227 [2024-12-10 04:06:39.335642] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335648] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.227 [2024-12-10 04:06:39.335672] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.227 [2024-12-10 04:06:39.335676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 
dnr:0 00:18:45.227 [2024-12-10 04:06:39.335680] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:45.227 [2024-12-10 04:06:39.335684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.335687] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:18:45.227 [2024-12-10 04:06:39.335698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.335705] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184d00 00:18:45.227 [2024-12-10 04:06:39.335743] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.227 [2024-12-10 04:06:39.335747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:45.227 [2024-12-10 04:06:39.335753] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:18:45.227 [2024-12-10 04:06:39.335757] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:18:45.227 [2024-12-10 04:06:39.335760] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:18:45.227 [2024-12-10 04:06:39.335765] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:18:45.227 [2024-12-10 04:06:39.335770] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:18:45.227 [2024-12-10 04:06:39.335775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.335778] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.335789] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.227 [2024-12-10 04:06:39.335818] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.227 [2024-12-10 04:06:39.335822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:45.227 [2024-12-10 04:06:39.335829] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.227 [2024-12-10 04:06:39.335839] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.227 [2024-12-10 04:06:39.335848] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.227 [2024-12-10 04:06:39.335858] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.227 [2024-12-10 04:06:39.335866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.335870] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.335881] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.227 [2024-12-10 04:06:39.335904] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.227 [2024-12-10 04:06:39.335908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:18:45.227 [2024-12-10 04:06:39.335913] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:18:45.227 [2024-12-10 04:06:39.335917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.335921] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.335933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.335938] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.335943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES 
cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.227 [2024-12-10 04:06:39.335965] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.227 [2024-12-10 04:06:39.335969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:18:45.227 [2024-12-10 04:06:39.336017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.336021] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.336026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.336033] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.336038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ca000 len:0x1000 key:0x184d00 00:18:45.227 [2024-12-10 04:06:39.336066] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.227 [2024-12-10 04:06:39.336070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:45.227 [2024-12-10 04:06:39.336081] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:18:45.227 [2024-12-10 04:06:39.336088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.336092] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.336097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.336103] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.336108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184d00 00:18:45.227 [2024-12-10 04:06:39.336136] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.227 [2024-12-10 04:06:39.336140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:45.227 [2024-12-10 04:06:39.336150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:45.227 [2024-12-10 04:06:39.336154] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x184d00 00:18:45.227 [2024-12-10 04:06:39.336160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:45.228 [2024-12-10 04:06:39.336166] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.228 
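The exchanges just above — IDENTIFY with cdw10:00000002 (active namespace list), the "Namespace 1 was added" message from spdk_nvme_ctrlr_get_ns, and the per-namespace IDENTIFY that follows — are the wire-level side of SPDK's namespace scan. As a minimal sketch (not part of this test run; the helper name is invented, and ctrlr is assumed to come from spdk_nvme_connect() against the same target), an application would walk the resulting list through the public host API like this:

/* Sketch: enumerate the active namespaces discovered during controller init.
 * Assumes 'ctrlr' was returned by spdk_nvme_connect(); all calls below are
 * public SPDK host API from spdk/nvme.h.
 */
#include "spdk/nvme.h"

#include <inttypes.h>
#include <stdio.h>

static void
print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	/* Iterates the active NSID list fetched by IDENTIFY cdw10:00000002. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL) {
			continue;
		}
		printf("Namespace %" PRIu32 ": %" PRIu64 " sectors of %" PRIu32 " bytes\n",
		       nsid,
		       spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}

For the cnode1 subsystem in this run the loop would visit a single namespace (nsid 1).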
[2024-12-10 04:06:39.336171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184d00 00:18:45.228 [2024-12-10 04:06:39.336199] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:45.228 [2024-12-10 04:06:39.336213] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:18:45.228 [2024-12-10 04:06:39.336224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:18:45.228 [2024-12-10 04:06:39.336229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:45.228 [2024-12-10 04:06:39.336233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:45.228 [2024-12-10 04:06:39.336238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:18:45.228 [2024-12-10 04:06:39.336242] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:18:45.228 [2024-12-10 04:06:39.336246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:18:45.228 [2024-12-10 04:06:39.336251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:18:45.228 [2024-12-10 04:06:39.336262] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.228 [2024-12-10 04:06:39.336277] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.228 [2024-12-10 04:06:39.336289] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336298] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336304] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336309] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.228 [2024-12-10 04:06:39.336314] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336322] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336334] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336342] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336349] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.228 [2024-12-10 04:06:39.336378] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336385] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336391] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.228 [2024-12-10 04:06:39.336411] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336419] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336429] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x2000 key:0x184d00 00:18:45.228 [2024-12-10 04:06:39.336441] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x200 key:0x184d00 00:18:45.228 [2024-12-10 04:06:39.336452] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x184d00 00:18:45.228 [2024-12-10 
04:06:39.336457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x200 key:0x184d00 00:18:45.228 [2024-12-10 04:06:39.336465] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c5000 len:0x1000 key:0x184d00 00:18:45.228 [2024-12-10 04:06:39.336476] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336487] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336495] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336507] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336511] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336519] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x184d00 00:18:45.228 [2024-12-10 04:06:39.336529] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.228 [2024-12-10 04:06:39.336534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:45.228 [2024-12-10 04:06:39.336540] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x184d00 00:18:45.228 ===================================================== 00:18:45.228 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:45.228 ===================================================== 00:18:45.228 Controller Capabilities/Features 00:18:45.228 ================================ 00:18:45.228 Vendor ID: 8086 00:18:45.228 Subsystem Vendor ID: 8086 00:18:45.228 Serial Number: SPDK00000000000001 00:18:45.228 Model Number: SPDK bdev Controller 00:18:45.228 Firmware Version: 25.01 00:18:45.228 Recommended Arb Burst: 6 00:18:45.228 IEEE OUI Identifier: e4 d2 5c 00:18:45.228 Multi-path I/O 00:18:45.228 May have multiple subsystem ports: Yes 00:18:45.228 May have multiple controllers: Yes 00:18:45.228 Associated with SR-IOV VF: No 00:18:45.228 Max Data Transfer Size: 131072 00:18:45.228 Max Number of Namespaces: 32 00:18:45.228 Max Number of I/O Queues: 127 00:18:45.228 NVMe Specification Version (VS): 1.3 00:18:45.228 NVMe Specification Version (Identify): 1.3 00:18:45.228 Maximum Queue Entries: 128 00:18:45.228 Contiguous Queues Required: Yes 00:18:45.228 Arbitration Mechanisms Supported 00:18:45.228 Weighted Round Robin: Not Supported 00:18:45.228 Vendor Specific: Not Supported 00:18:45.228 
Reset Timeout: 15000 ms 00:18:45.228 Doorbell Stride: 4 bytes 00:18:45.228 NVM Subsystem Reset: Not Supported 00:18:45.228 Command Sets Supported 00:18:45.228 NVM Command Set: Supported 00:18:45.228 Boot Partition: Not Supported 00:18:45.228 Memory Page Size Minimum: 4096 bytes 00:18:45.228 Memory Page Size Maximum: 4096 bytes 00:18:45.228 Persistent Memory Region: Not Supported 00:18:45.228 Optional Asynchronous Events Supported 00:18:45.228 Namespace Attribute Notices: Supported 00:18:45.228 Firmware Activation Notices: Not Supported 00:18:45.228 ANA Change Notices: Not Supported 00:18:45.228 PLE Aggregate Log Change Notices: Not Supported 00:18:45.228 LBA Status Info Alert Notices: Not Supported 00:18:45.228 EGE Aggregate Log Change Notices: Not Supported 00:18:45.228 Normal NVM Subsystem Shutdown event: Not Supported 00:18:45.228 Zone Descriptor Change Notices: Not Supported 00:18:45.228 Discovery Log Change Notices: Not Supported 00:18:45.228 Controller Attributes 00:18:45.228 128-bit Host Identifier: Supported 00:18:45.228 Non-Operational Permissive Mode: Not Supported 00:18:45.228 NVM Sets: Not Supported 00:18:45.228 Read Recovery Levels: Not Supported 00:18:45.228 Endurance Groups: Not Supported 00:18:45.228 Predictable Latency Mode: Not Supported 00:18:45.228 Traffic Based Keep Alive: Not Supported 00:18:45.228 Namespace Granularity: Not Supported 00:18:45.228 SQ Associations: Not Supported 00:18:45.229 UUID List: Not Supported 00:18:45.229 Multi-Domain Subsystem: Not Supported 00:18:45.229 Fixed Capacity Management: Not Supported 00:18:45.229 Variable Capacity Management: Not Supported 00:18:45.229 Delete Endurance Group: Not Supported 00:18:45.229 Delete NVM Set: Not Supported 00:18:45.229 Extended LBA Formats Supported: Not Supported 00:18:45.229 Flexible Data Placement Supported: Not Supported 00:18:45.229 00:18:45.229 Controller Memory Buffer Support 00:18:45.229 ================================ 00:18:45.229 Supported: No 00:18:45.229 00:18:45.229 Persistent Memory Region Support 00:18:45.229 ================================ 00:18:45.229 Supported: No 00:18:45.229 00:18:45.229 Admin Command Set Attributes 00:18:45.229 ============================ 00:18:45.229 Security Send/Receive: Not Supported 00:18:45.229 Format NVM: Not Supported 00:18:45.229 Firmware Activate/Download: Not Supported 00:18:45.229 Namespace Management: Not Supported 00:18:45.229 Device Self-Test: Not Supported 00:18:45.229 Directives: Not Supported 00:18:45.229 NVMe-MI: Not Supported 00:18:45.229 Virtualization Management: Not Supported 00:18:45.229 Doorbell Buffer Config: Not Supported 00:18:45.229 Get LBA Status Capability: Not Supported 00:18:45.229 Command & Feature Lockdown Capability: Not Supported 00:18:45.229 Abort Command Limit: 4 00:18:45.229 Async Event Request Limit: 4 00:18:45.229 Number of Firmware Slots: N/A 00:18:45.229 Firmware Slot 1 Read-Only: N/A 00:18:45.229 Firmware Activation Without Reset: N/A 00:18:45.229 Multiple Update Detection Support: N/A 00:18:45.229 Firmware Update Granularity: No Information Provided 00:18:45.229 Per-Namespace SMART Log: No 00:18:45.229 Asymmetric Namespace Access Log Page: Not Supported 00:18:45.229 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:45.229 Command Effects Log Page: Supported 00:18:45.229 Get Log Page Extended Data: Supported 00:18:45.229 Telemetry Log Pages: Not Supported 00:18:45.229 Persistent Event Log Pages: Not Supported 00:18:45.229 Supported Log Pages Log Page: May Support 00:18:45.229 Commands Supported & Effects Log Page: Not 
Supported 00:18:45.229 Feature Identifiers & Effects Log Page: May Support 00:18:45.229 NVMe-MI Commands & Effects Log Page: May Support 00:18:45.229 Data Area 4 for Telemetry Log: Not Supported 00:18:45.229 Error Log Page Entries Supported: 128 00:18:45.229 Keep Alive: Supported 00:18:45.229 Keep Alive Granularity: 10000 ms 00:18:45.229 00:18:45.229 NVM Command Set Attributes 00:18:45.229 ========================== 00:18:45.229 Submission Queue Entry Size 00:18:45.229 Max: 64 00:18:45.229 Min: 64 00:18:45.229 Completion Queue Entry Size 00:18:45.229 Max: 16 00:18:45.229 Min: 16 00:18:45.229 Number of Namespaces: 32 00:18:45.229 Compare Command: Supported 00:18:45.229 Write Uncorrectable Command: Not Supported 00:18:45.229 Dataset Management Command: Supported 00:18:45.229 Write Zeroes Command: Supported 00:18:45.229 Set Features Save Field: Not Supported 00:18:45.229 Reservations: Supported 00:18:45.229 Timestamp: Not Supported 00:18:45.229 Copy: Supported 00:18:45.229 Volatile Write Cache: Present 00:18:45.229 Atomic Write Unit (Normal): 1 00:18:45.229 Atomic Write Unit (PFail): 1 00:18:45.229 Atomic Compare & Write Unit: 1 00:18:45.229 Fused Compare & Write: Supported 00:18:45.229 Scatter-Gather List 00:18:45.229 SGL Command Set: Supported 00:18:45.229 SGL Keyed: Supported 00:18:45.229 SGL Bit Bucket Descriptor: Not Supported 00:18:45.229 SGL Metadata Pointer: Not Supported 00:18:45.229 Oversized SGL: Not Supported 00:18:45.229 SGL Metadata Address: Not Supported 00:18:45.229 SGL Offset: Supported 00:18:45.229 Transport SGL Data Block: Not Supported 00:18:45.229 Replay Protected Memory Block: Not Supported 00:18:45.229 00:18:45.229 Firmware Slot Information 00:18:45.229 ========================= 00:18:45.229 Active slot: 1 00:18:45.229 Slot 1 Firmware Revision: 25.01 00:18:45.229 00:18:45.229 00:18:45.229 Commands Supported and Effects 00:18:45.229 ============================== 00:18:45.229 Admin Commands 00:18:45.229 -------------- 00:18:45.229 Get Log Page (02h): Supported 00:18:45.229 Identify (06h): Supported 00:18:45.229 Abort (08h): Supported 00:18:45.229 Set Features (09h): Supported 00:18:45.229 Get Features (0Ah): Supported 00:18:45.229 Asynchronous Event Request (0Ch): Supported 00:18:45.229 Keep Alive (18h): Supported 00:18:45.229 I/O Commands 00:18:45.229 ------------ 00:18:45.229 Flush (00h): Supported LBA-Change 00:18:45.229 Write (01h): Supported LBA-Change 00:18:45.229 Read (02h): Supported 00:18:45.229 Compare (05h): Supported 00:18:45.229 Write Zeroes (08h): Supported LBA-Change 00:18:45.229 Dataset Management (09h): Supported LBA-Change 00:18:45.229 Copy (19h): Supported LBA-Change 00:18:45.229 00:18:45.229 Error Log 00:18:45.229 ========= 00:18:45.229 00:18:45.229 Arbitration 00:18:45.229 =========== 00:18:45.229 Arbitration Burst: 1 00:18:45.229 00:18:45.229 Power Management 00:18:45.229 ================ 00:18:45.229 Number of Power States: 1 00:18:45.229 Current Power State: Power State #0 00:18:45.229 Power State #0: 00:18:45.229 Max Power: 0.00 W 00:18:45.229 Non-Operational State: Operational 00:18:45.229 Entry Latency: Not Reported 00:18:45.229 Exit Latency: Not Reported 00:18:45.229 Relative Read Throughput: 0 00:18:45.229 Relative Read Latency: 0 00:18:45.229 Relative Write Throughput: 0 00:18:45.229 Relative Write Latency: 0 00:18:45.229 Idle Power: Not Reported 00:18:45.229 Active Power: Not Reported 00:18:45.229 Non-Operational Permissive Mode: Not Supported 00:18:45.229 00:18:45.229 Health Information 00:18:45.229 ================== 00:18:45.229 Critical 
Warnings: 00:18:45.229 Available Spare Space: OK 00:18:45.229 Temperature: OK 00:18:45.229 Device Reliability: OK 00:18:45.229 Read Only: No 00:18:45.229 Volatile Memory Backup: OK 00:18:45.229 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:45.229 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:45.229 Available Spare: 0% 00:18:45.229 Available Spare Threshold: 0% 00:18:45.229 Life Percentage [2024-12-10 04:06:39.336612] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x184d00 00:18:45.229 [2024-12-10 04:06:39.336619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.229 [2024-12-10 04:06:39.336636] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.229 [2024-12-10 04:06:39.336640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336644] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336665] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:18:45.230 [2024-12-10 04:06:39.336671] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3193 doesn't match qid 00:18:45.230 [2024-12-10 04:06:39.336682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32576 cdw0:aa44e2e0 sqhd:7880 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336687] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3193 doesn't match qid 00:18:45.230 [2024-12-10 04:06:39.336692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32576 cdw0:aa44e2e0 sqhd:7880 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336696] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3193 doesn't match qid 00:18:45.230 [2024-12-10 04:06:39.336702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32576 cdw0:aa44e2e0 sqhd:7880 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336706] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3193 doesn't match qid 00:18:45.230 [2024-12-10 04:06:39.336711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32576 cdw0:aa44e2e0 sqhd:7880 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336717] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.336738] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.336742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336748] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.336757] 
nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336774] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.336778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336782] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:18:45.230 [2024-12-10 04:06:39.336786] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:18:45.230 [2024-12-10 04:06:39.336790] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336796] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.336825] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.336829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336833] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336840] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.336863] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.336867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336871] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336878] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.336905] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.336909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336913] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336920] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.336945] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: 
CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.336949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336954] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336960] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.336966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.336985] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.336989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.336993] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337000] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.337023] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.337028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.337032] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337039] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.337062] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.337067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.337071] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337077] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.337100] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.337104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.337108] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337115] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 
04:06:39.337120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.337137] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.337142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.337146] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337152] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.337174] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.337178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.337182] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337189] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.230 [2024-12-10 04:06:39.337194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.230 [2024-12-10 04:06:39.337213] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.230 [2024-12-10 04:06:39.337217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:45.230 [2024-12-10 04:06:39.337221] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337227] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337247] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337255] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337262] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337287] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337295] 
nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337301] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337326] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337334] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337341] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337362] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337370] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337376] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337401] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337410] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337416] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337440] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337448] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337454] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337478] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337486] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337494] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337518] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337526] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337532] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337559] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337567] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337574] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337597] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337605] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337612] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337633] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337641] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337647] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337668] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337676] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337683] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337705] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337715] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337721] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337745] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337753] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337759] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337781] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337789] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337795] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337815] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 
04:06:39.337823] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337829] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337854] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.231 [2024-12-10 04:06:39.337858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:45.231 [2024-12-10 04:06:39.337862] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337869] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.231 [2024-12-10 04:06:39.337874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.231 [2024-12-10 04:06:39.337896] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.337900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.337904] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.337910] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.337915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.337937] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.337941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.337946] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.337952] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.337958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.337978] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.337982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.337986] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.337992] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.337998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338018] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338026] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338032] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338056] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338064] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338070] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338096] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338104] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338110] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338137] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338145] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338152] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338174] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338184] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338190] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338210] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338218] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338224] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338244] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338252] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338258] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338290] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338298] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338304] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338323] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338331] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338337] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338360] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 
04:06:39.338368] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338374] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338398] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338407] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338414] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338438] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338446] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338452] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338480] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:45.232 [2024-12-10 04:06:39.338488] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338494] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.232 [2024-12-10 04:06:39.338500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.232 [2024-12-10 04:06:39.338519] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.232 [2024-12-10 04:06:39.338523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:45.233 [2024-12-10 04:06:39.338527] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338533] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.233 [2024-12-10 04:06:39.338555] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.233 [2024-12-10 04:06:39.338559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:45.233 [2024-12-10 04:06:39.338563] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338570] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.233 [2024-12-10 04:06:39.338594] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.233 [2024-12-10 04:06:39.338597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:45.233 [2024-12-10 04:06:39.338602] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338608] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.233 [2024-12-10 04:06:39.338632] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.233 [2024-12-10 04:06:39.338636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:45.233 [2024-12-10 04:06:39.338640] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338646] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.233 [2024-12-10 04:06:39.338672] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.233 [2024-12-10 04:06:39.338675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:45.233 [2024-12-10 04:06:39.338680] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338686] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:45.233 [2024-12-10 04:06:39.338710] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:45.233 [2024-12-10 04:06:39.338714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:45.233 [2024-12-10 04:06:39.338718] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x184d00 00:18:45.233 [2024-12-10 04:06:39.338724] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
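The shutdown sequence that completes below is the host polling the controller's CSTS register with Fabrics Property Get commands until CSTS.SHST reports "shutdown processing complete". As a hedged, standalone illustration (the device node /dev/nvme0 is an assumption, not taken from this run), the same register can be read by hand with nvme-cli against a connected fabrics controller:

    # Read CSTS (register offset 0x1c per the NVMe specification).
    nvme get-property /dev/nvme0 --offset=0x1c --human-readable
    # CSTS.SHST (bits 03:02) == 10b indicates shutdown processing complete,
    # the condition nvme_ctrlr_shutdown_poll_async waits for below.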
00:18:45.234 [2024-12-10 04:06:39.343310] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:45.234 [2024-12-10 04:06:39.343314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001c p:0 m:0 dnr:0
00:18:45.234 [2024-12-10 04:06:39.343318] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x184d00
00:18:45.234 [2024-12-10 04:06:39.343323] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:18:45.234 Used: 0%
00:18:45.234 Data Units Read: 0
00:18:45.234 Data Units Written: 0
00:18:45.234 Host Read Commands: 0
00:18:45.234 Host Write Commands: 0
00:18:45.234 Controller Busy Time: 0 minutes
00:18:45.234 Power Cycles: 0
00:18:45.234 Power On Hours: 0 hours
00:18:45.234 Unsafe Shutdowns: 0
00:18:45.234 Unrecoverable Media Errors: 0
00:18:45.234 Lifetime Error Log Entries: 0
00:18:45.234 Warning Temperature Time: 0 minutes
00:18:45.234 Critical Temperature Time: 0 minutes
00:18:45.234
00:18:45.234 Number of Queues
00:18:45.234 ================
00:18:45.234 Number of I/O Submission Queues: 127
00:18:45.234 Number of I/O Completion Queues: 127
00:18:45.234
00:18:45.234 Active Namespaces
00:18:45.234 =================
00:18:45.234 Namespace ID:1
00:18:45.234 Error Recovery Timeout: Unlimited
00:18:45.234 Command Set Identifier: NVM (00h)
00:18:45.234 Deallocate: Supported
00:18:45.234 Deallocated/Unwritten Error: Not Supported
00:18:45.234 Deallocated Read Value: Unknown
00:18:45.234 Deallocate in Write Zeroes: Not Supported
00:18:45.234 Deallocated Guard Field: 0xFFFF
00:18:45.234 Flush: Supported
00:18:45.234 Reservation: Supported
00:18:45.234 Namespace Sharing Capabilities: Multiple Controllers
00:18:45.234 Size (in LBAs): 131072 (0GiB)
00:18:45.234 Capacity (in LBAs): 131072 (0GiB)
00:18:45.234 Utilization (in LBAs): 131072 (0GiB)
00:18:45.234 NGUID: ABCDEF0123456789ABCDEF0123456789
00:18:45.234 EUI64: ABCDEF0123456789
00:18:45.234 UUID: 794ebc74-fa14-40f9-b3f1-69f7d597b204
00:18:45.234 Thin Provisioning: Not Supported
00:18:45.234 Per-NS Atomic Units: Yes
00:18:45.234 Atomic Boundary Size (Normal): 0
00:18:45.234 Atomic Boundary Size (PFail): 0
00:18:45.234 Atomic Boundary Offset: 0
00:18:45.234 Maximum Single Source Range Length: 65535
00:18:45.234 Maximum Copy Length: 65535
00:18:45.234 Maximum Source Range Count: 1
00:18:45.234 NGUID/EUI64 Never Reused: No
00:18:45.234 Namespace Write Protected: No
00:18:45.234 Number of LBA Formats: 1
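The namespace attributes above were returned by the identify pass against the subsystem this run exports over RDMA. A minimal manual equivalent, assuming the target from this log is still listening (the /dev node names are assumptions, and the hostnqn/hostid would normally come from nvme gen-hostnqn as in the harness setup):

    # Connect to the subsystem advertised at 192.168.100.8:4420, dump the
    # same per-namespace identify data, then detach again.
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ns /dev/nvme0n1            # assumed node created by the connect
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1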
00:18:45.234 Current LBA Format: LBA Format #00 00:18:45.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:45.234 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:45.234 rmmod nvme_rdma 00:18:45.234 rmmod nvme_fabrics 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 815008 ']' 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 815008 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 815008 ']' 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 815008 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 815008 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 815008' 00:18:45.234 killing process with pid 815008 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 815008 00:18:45.234 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 815008 00:18:45.492 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:45.492 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:45.492 00:18:45.492 real 0m7.136s 
00:18:45.492 user 0m5.794s 00:18:45.492 sys 0m4.713s 00:18:45.492 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.492 04:06:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:45.492 ************************************ 00:18:45.492 END TEST nvmf_identify 00:18:45.492 ************************************ 00:18:45.492 04:06:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:18:45.492 04:06:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:45.492 04:06:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.492 04:06:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.492 ************************************ 00:18:45.492 START TEST nvmf_perf 00:18:45.492 ************************************ 00:18:45.492 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:18:45.750 * Looking for test storage... 00:18:45.750 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.750 --rc genhtml_branch_coverage=1 00:18:45.750 --rc genhtml_function_coverage=1 00:18:45.750 --rc genhtml_legend=1 00:18:45.750 --rc geninfo_all_blocks=1 00:18:45.750 --rc geninfo_unexecuted_blocks=1 00:18:45.750 00:18:45.750 ' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.750 --rc genhtml_branch_coverage=1 00:18:45.750 --rc genhtml_function_coverage=1 00:18:45.750 --rc genhtml_legend=1 00:18:45.750 --rc geninfo_all_blocks=1 00:18:45.750 --rc geninfo_unexecuted_blocks=1 00:18:45.750 00:18:45.750 ' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.750 --rc genhtml_branch_coverage=1 00:18:45.750 --rc genhtml_function_coverage=1 00:18:45.750 --rc genhtml_legend=1 00:18:45.750 --rc geninfo_all_blocks=1 00:18:45.750 --rc geninfo_unexecuted_blocks=1 00:18:45.750 00:18:45.750 ' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.750 --rc genhtml_branch_coverage=1 00:18:45.750 --rc genhtml_function_coverage=1 00:18:45.750 --rc genhtml_legend=1 00:18:45.750 --rc geninfo_all_blocks=1 00:18:45.750 --rc geninfo_unexecuted_blocks=1 00:18:45.750 00:18:45.750 ' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.750 04:06:39 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.750 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.750 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:45.751 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.751 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.751 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.751 04:06:39 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.751 04:06:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.751 04:06:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:45.751 04:06:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:45.751 04:06:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:18:45.751 04:06:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.299 04:06:45 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:52.299 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:52.299 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:52.299 Found net devices under 0000:18:00.0: mlx_0_0 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
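Stripped of the xtrace noise, nvmf/common.sh is walking the Mellanox ConnectX functions it cached (0000:18:00.0 and 0000:18:00.1, device ID 0x1015) and resolving each PCI address to its kernel netdev through the same sysfs glob visible above. A standalone sketch of that lookup (PCI addresses taken from this log; the rest is generic shell):

    # Map each RDMA-capable PCI function to its net interface via sysfs.
    for pci in 0000:18:00.0 0000:18:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
        done
    done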
00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.299 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:52.300 Found net devices under 0000:18:00.1: mlx_0_1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:52.300 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:52.300 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:52.300 altname enp24s0f0np0 00:18:52.300 altname ens785f0np0 00:18:52.300 inet 192.168.100.8/24 scope global mlx_0_0 00:18:52.300 valid_lft forever preferred_lft forever 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:52.300 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:52.300 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:52.300 altname enp24s0f1np1 00:18:52.300 altname ens785f1np1 00:18:52.300 inet 192.168.100.9/24 scope global mlx_0_1 00:18:52.300 valid_lft forever preferred_lft forever 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- 
# '[' '' == iso ']' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:18:52.300 192.168.100.9' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:52.300 192.168.100.9' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:52.300 192.168.100.9' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=818433 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 818433 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 818433 ']' 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.300 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:52.301 [2024-12-10 04:06:45.712758] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:18:52.301 [2024-12-10 04:06:45.712809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.301 [2024-12-10 04:06:45.770272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.301 [2024-12-10 04:06:45.809431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:52.301 [2024-12-10 04:06:45.809464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.301 [2024-12-10 04:06:45.809471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.301 [2024-12-10 04:06:45.809476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.301 [2024-12-10 04:06:45.809481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.301 [2024-12-10 04:06:45.810842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.301 [2024-12-10 04:06:45.810940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.301 [2024-12-10 04:06:45.811039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.301 [2024-12-10 04:06:45.811040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:18:52.301 04:06:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:18:54.822 04:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:18:54.822 04:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:54.822 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:18:54.822 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:55.079 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:55.079 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:18:55.079 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:55.079 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:18:55.079 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:18:55.336 [2024-12-10 04:06:49.525595] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:18:55.336 [2024-12-10 04:06:49.543985] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c5ec60/0x1b34580) succeed. 00:18:55.336 [2024-12-10 04:06:49.552341] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b1f690/0x1bb4240) succeed. 
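With both IB devices created, the xtrace lines around this point assemble the target over RPC. Condensed from the invocations this run logs (the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path is shortened to rpc.py; arguments are the ones logged):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    rpc.py bdev_malloc_create 64 512                  # creates Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420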
00:18:55.336 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:55.593 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:55.593 04:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.850 04:06:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:55.850 04:06:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:55.850 04:06:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:56.106 [2024-12-10 04:06:50.386973] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:56.107 04:06:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:56.363 04:06:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:18:56.363 04:06:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:18:56.363 04:06:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:56.363 04:06:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:18:57.733 Initializing NVMe Controllers 00:18:57.733 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:18:57.733 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:18:57.733 Initialization complete. Launching workers. 00:18:57.733 ======================================================== 00:18:57.733 Latency(us) 00:18:57.733 Device Information : IOPS MiB/s Average min max 00:18:57.733 PCIE (0000:d8:00.0) NSID 1 from core 0: 106856.24 417.41 299.02 32.06 8188.20 00:18:57.733 ======================================================== 00:18:57.733 Total : 106856.24 417.41 299.02 32.06 8188.20 00:18:57.733 00:18:57.733 04:06:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:01.006 Initializing NVMe Controllers 00:19:01.006 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:01.006 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:01.006 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:01.006 Initialization complete. Launching workers. 
00:19:01.006 ======================================================== 00:19:01.006 Latency(us) 00:19:01.006 Device Information : IOPS MiB/s Average min max 00:19:01.006 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6996.86 27.33 142.14 44.92 6068.47 00:19:01.006 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5400.60 21.10 184.96 68.76 6030.72 00:19:01.006 ======================================================== 00:19:01.006 Total : 12397.46 48.43 160.79 44.92 6068.47 00:19:01.006 00:19:01.006 04:06:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:04.279 Initializing NVMe Controllers 00:19:04.279 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:04.279 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:04.279 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:04.279 Initialization complete. Launching workers. 00:19:04.279 ======================================================== 00:19:04.279 Latency(us) 00:19:04.279 Device Information : IOPS MiB/s Average min max 00:19:04.279 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19225.23 75.10 1664.02 468.85 8513.12 00:19:04.279 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4017.00 15.69 8027.07 7758.82 11033.77 00:19:04.279 ======================================================== 00:19:04.279 Total : 23242.24 90.79 2763.76 468.85 11033.77 00:19:04.279 00:19:04.279 04:06:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:19:04.279 04:06:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:09.546 Initializing NVMe Controllers 00:19:09.546 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:09.546 Controller IO queue size 128, less than required. 00:19:09.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:09.546 Controller IO queue size 128, less than required. 00:19:09.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:09.546 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:09.546 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:09.546 Initialization complete. Launching workers. 
00:19:09.546 ======================================================== 00:19:09.546 Latency(us) 00:19:09.546 Device Information : IOPS MiB/s Average min max 00:19:09.546 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4169.89 1042.47 30721.52 13810.45 69924.73 00:19:09.546 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4173.88 1043.47 30389.52 14301.04 50440.01 00:19:09.546 ======================================================== 00:19:09.546 Total : 8343.77 2085.94 30555.44 13810.45 69924.73 00:19:09.546 00:19:09.546 04:07:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:19:09.546 No valid NVMe controllers or AIO or URING devices found 00:19:09.546 Initializing NVMe Controllers 00:19:09.546 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:09.546 Controller IO queue size 128, less than required. 00:19:09.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:09.546 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:09.546 Controller IO queue size 128, less than required. 00:19:09.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:09.546 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:19:09.546 WARNING: Some requested NVMe devices were skipped 00:19:09.546 04:07:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:19:13.741 Initializing NVMe Controllers 00:19:13.741 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:13.741 Controller IO queue size 128, less than required. 00:19:13.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:13.741 Controller IO queue size 128, less than required. 00:19:13.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:13.741 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:13.741 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:13.741 Initialization complete. Launching workers. 
00:19:13.741 00:19:13.741 ==================== 00:19:13.741 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:13.741 RDMA transport: 00:19:13.741 dev name: mlx5_0 00:19:13.741 polls: 431666 00:19:13.741 idle_polls: 427698 00:19:13.741 completions: 47402 00:19:13.741 queued_requests: 1 00:19:13.741 total_send_wrs: 23701 00:19:13.741 send_doorbell_updates: 3729 00:19:13.741 total_recv_wrs: 23828 00:19:13.741 recv_doorbell_updates: 3731 00:19:13.741 --------------------------------- 00:19:13.742 00:19:13.742 ==================== 00:19:13.742 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:13.742 RDMA transport: 00:19:13.742 dev name: mlx5_0 00:19:13.742 polls: 433129 00:19:13.742 idle_polls: 432850 00:19:13.742 completions: 20978 00:19:13.742 queued_requests: 1 00:19:13.742 total_send_wrs: 10489 00:19:13.742 send_doorbell_updates: 253 00:19:13.742 total_recv_wrs: 10616 00:19:13.742 recv_doorbell_updates: 254 00:19:13.742 --------------------------------- 00:19:13.742 ======================================================== 00:19:13.742 Latency(us) 00:19:13.742 Device Information : IOPS MiB/s Average min max 00:19:13.742 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5924.98 1481.24 21652.18 10484.06 53112.78 00:19:13.742 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2621.99 655.50 48845.83 25819.84 72111.46 00:19:13.742 ======================================================== 00:19:13.742 Total : 8546.97 2136.74 29994.49 10484.06 72111.46 00:19:13.742 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:13.742 rmmod nvme_rdma 00:19:13.742 rmmod nvme_fabrics 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 818433 ']' 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 818433 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 818433 ']' 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@958 -- # kill -0 818433 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.742 04:07:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 818433 00:19:13.742 04:07:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.742 04:07:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.742 04:07:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 818433' 00:19:13.742 killing process with pid 818433 00:19:13.742 04:07:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 818433 00:19:13.742 04:07:08 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 818433 00:19:17.934 04:07:11 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:17.934 04:07:11 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:17.934 00:19:17.934 real 0m32.067s 00:19:17.934 user 1m46.035s 00:19:17.934 sys 0m5.590s 00:19:17.934 04:07:11 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.934 04:07:11 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:17.934 ************************************ 00:19:17.934 END TEST nvmf_perf 00:19:17.934 ************************************ 00:19:17.934 04:07:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:19:17.934 04:07:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.934 04:07:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.934 04:07:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.934 ************************************ 00:19:17.934 START TEST nvmf_fio_host 00:19:17.934 ************************************ 00:19:17.934 04:07:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:19:17.934 * Looking for test storage... 
00:19:17.934 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.934 --rc genhtml_branch_coverage=1 00:19:17.934 --rc genhtml_function_coverage=1 00:19:17.934 --rc genhtml_legend=1 00:19:17.934 --rc geninfo_all_blocks=1 00:19:17.934 --rc geninfo_unexecuted_blocks=1 00:19:17.934 00:19:17.934 ' 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.934 --rc genhtml_branch_coverage=1 00:19:17.934 --rc genhtml_function_coverage=1 00:19:17.934 --rc genhtml_legend=1 00:19:17.934 --rc geninfo_all_blocks=1 00:19:17.934 --rc geninfo_unexecuted_blocks=1 00:19:17.934 00:19:17.934 ' 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.934 --rc genhtml_branch_coverage=1 00:19:17.934 --rc genhtml_function_coverage=1 00:19:17.934 --rc genhtml_legend=1 00:19:17.934 --rc geninfo_all_blocks=1 00:19:17.934 --rc geninfo_unexecuted_blocks=1 00:19:17.934 00:19:17.934 ' 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.934 --rc genhtml_branch_coverage=1 00:19:17.934 --rc genhtml_function_coverage=1 00:19:17.934 --rc genhtml_legend=1 00:19:17.934 --rc geninfo_all_blocks=1 00:19:17.934 --rc geninfo_unexecuted_blocks=1 00:19:17.934 00:19:17.934 ' 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.934 04:07:12 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.934 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.935 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:17.935 
04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:19:17.935 04:07:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:23.209 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:23.209 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:23.209 Found net devices under 0000:18:00.0: mlx_0_0 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:23.209 Found net devices under 0000:18:00.1: mlx_0_1 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:23.209 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:23.210 
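rdma_device_init above reduces to loading the kernel RDMA stack before any addresses are read. A condensed sketch of what the trace just executed, with the module list copied from the trace (run as root):

  # load the InfiniBand/RDMA core modules used by the mlx5 ports
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done

allocate_nic_ips, which follows, walks get_rdma_if_list and extracts each interface's IPv4 address via 'ip -o -4 addr show' piped through awk and cut, which is where the 192.168.100.8 and 192.168.100.9 values below come from.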
04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:23.210 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.210 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:19:23.210 altname enp24s0f0np0 00:19:23.210 altname ens785f0np0 00:19:23.210 inet 192.168.100.8/24 scope global mlx_0_0 00:19:23.210 valid_lft forever preferred_lft forever 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:23.210 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.210 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:19:23.210 altname enp24s0f1np1 00:19:23.210 altname ens785f1np1 00:19:23.210 inet 192.168.100.9/24 scope global mlx_0_1 00:19:23.210 valid_lft forever preferred_lft forever 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:23.210 04:07:17 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:23.210 192.168.100.9' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:23.210 192.168.100.9' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:23.210 192.168.100.9' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=826444 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 826444 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 826444 ']' 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.210 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.211 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.211 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.211 [2024-12-10 04:07:17.428333] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:23.211 [2024-12-10 04:07:17.428376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.211 [2024-12-10 04:07:17.489405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.211 [2024-12-10 04:07:17.528820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.211 [2024-12-10 04:07:17.528855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.211 [2024-12-10 04:07:17.528862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.211 [2024-12-10 04:07:17.528868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.211 [2024-12-10 04:07:17.528872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.211 [2024-12-10 04:07:17.530147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.211 [2024-12-10 04:07:17.530170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.211 [2024-12-10 04:07:17.530254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.211 [2024-12-10 04:07:17.530256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.469 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.469 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:19:23.469 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:23.469 [2024-12-10 04:07:17.801662] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f680c0/0x1f6c5b0) succeed. 00:19:23.469 [2024-12-10 04:07:17.809895] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f69750/0x1fadc50) succeed. 
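At this point nvmf_tgt is up on all four reactors and the RDMA transport exists, so the fio host test can provision its target entirely over JSON-RPC. A condensed sketch of the sequence the trace performs next, with arguments copied from the trace and the rpc.py path shortened to its repository-relative form:

  # back the subsystem with a 64 MiB RAM disk (512 B blocks) and export it
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420

The '*** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***' notice below confirms the listener is active before fio connects.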
00:19:23.728 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:23.728 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.728 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.728 04:07:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:23.985 Malloc1 00:19:23.985 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:24.243 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:24.243 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:24.501 [2024-12-10 04:07:18.727950] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:24.501 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:24.758 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:19:24.758 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:24.759 04:07:18 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:24.759 04:07:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:25.016 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:25.016 fio-3.35 00:19:25.016 Starting 1 thread 00:19:27.553 00:19:27.553 test: (groupid=0, jobs=1): err= 0: pid=826875: Tue Dec 10 04:07:21 2024 00:19:27.553 read: IOPS=18.8k, BW=73.4MiB/s (77.0MB/s)(147MiB/2004msec) 00:19:27.553 slat (nsec): min=1279, max=31389, avg=1411.73, stdev=483.75 00:19:27.553 clat (usec): min=2266, max=6163, avg=3380.49, stdev=86.36 00:19:27.553 lat (usec): min=2289, max=6164, avg=3381.91, stdev=86.32 00:19:27.553 clat percentiles (usec): 00:19:27.553 | 1.00th=[ 3326], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3359], 00:19:27.553 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3392], 00:19:27.553 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392], 00:19:27.553 | 99.00th=[ 3589], 99.50th=[ 3720], 99.90th=[ 4424], 99.95th=[ 5276], 00:19:27.553 | 99.99th=[ 6128] 00:19:27.553 bw ( KiB/s): min=73496, max=75880, per=100.00%, avg=75170.00, stdev=1121.97, samples=4 00:19:27.553 iops : min=18374, max=18970, avg=18792.50, stdev=280.49, samples=4 00:19:27.553 write: IOPS=18.8k, BW=73.4MiB/s (77.0MB/s)(147MiB/2004msec); 0 zone resets 00:19:27.553 slat (nsec): min=1304, max=17374, avg=1766.87, stdev=512.80 00:19:27.553 clat (usec): min=2300, max=6150, avg=3378.61, stdev=82.45 00:19:27.553 lat (usec): min=2312, max=6152, avg=3380.38, stdev=82.42 00:19:27.553 clat percentiles (usec): 00:19:27.553 | 1.00th=[ 3326], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3359], 00:19:27.553 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3392], 00:19:27.553 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392], 00:19:27.553 | 99.00th=[ 3556], 99.50th=[ 3720], 99.90th=[ 4359], 99.95th=[ 5276], 00:19:27.553 | 99.99th=[ 6128] 00:19:27.553 bw ( KiB/s): min=73448, max=75840, per=100.00%, avg=75200.00, stdev=1169.44, samples=4 00:19:27.553 iops : min=18362, max=18960, avg=18800.00, stdev=292.36, samples=4 00:19:27.553 lat (msec) : 4=99.84%, 10=0.16% 00:19:27.553 cpu : usr=99.50%, sys=0.10%, ctx=15, majf=0, minf=4 00:19:27.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:27.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.553 issued rwts: total=37650,37662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.553 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.553 00:19:27.553 Run status group 0 (all jobs): 00:19:27.553 READ: bw=73.4MiB/s (77.0MB/s), 73.4MiB/s-73.4MiB/s (77.0MB/s-77.0MB/s), io=147MiB (154MB), run=2004-2004msec 00:19:27.553 WRITE: bw=73.4MiB/s (77.0MB/s), 73.4MiB/s-73.4MiB/s (77.0MB/s-77.0MB/s), io=147MiB (154MB), run=2004-2004msec 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:27.553 04:07:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:19:27.810 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:27.810 fio-3.35 00:19:27.810 Starting 1 thread 00:19:30.378 00:19:30.378 test: (groupid=0, jobs=1): err= 0: pid=827521: Tue Dec 10 04:07:24 2024 00:19:30.378 read: IOPS=15.1k, BW=235MiB/s (247MB/s)(461MiB/1960msec) 00:19:30.378 slat (nsec): min=2112, max=44807, avg=2529.02, stdev=1423.62 00:19:30.378 clat (usec): min=407, max=10098, avg=1599.42, stdev=1398.51 00:19:30.378 lat (usec): min=409, max=10103, avg=1601.95, stdev=1399.21 00:19:30.378 clat percentiles (usec): 00:19:30.378 | 1.00th=[ 635], 5.00th=[ 734], 10.00th=[ 783], 20.00th=[ 865], 00:19:30.378 | 30.00th=[ 922], 40.00th=[ 996], 50.00th=[ 1106], 60.00th=[ 1221], 00:19:30.378 | 70.00th=[ 1352], 80.00th=[ 1532], 90.00th=[ 4293], 95.00th=[ 4752], 00:19:30.378 | 99.00th=[ 6849], 99.50th=[ 7832], 99.90th=[ 9372], 99.95th=[ 9765], 00:19:30.378 | 99.99th=[10028] 00:19:30.378 bw ( KiB/s): min=100960, max=122624, per=48.03%, avg=115704.00, stdev=9978.03, samples=4 00:19:30.378 iops : min= 6310, max= 7664, avg=7231.50, stdev=623.63, samples=4 00:19:30.378 write: IOPS=8568, BW=134MiB/s (140MB/s)(235MiB/1757msec); 0 zone resets 00:19:30.378 slat (usec): min=24, max=117, avg=28.09, stdev= 6.07 00:19:30.378 clat (usec): min=3671, max=18631, avg=11942.28, stdev=1748.24 00:19:30.378 lat (usec): min=3702, max=18659, avg=11970.37, stdev=1747.91 00:19:30.378 clat percentiles (usec): 00:19:30.378 | 1.00th=[ 5866], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:19:30.378 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:19:30.378 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14091], 95.00th=[14615], 00:19:30.378 | 99.00th=[16057], 99.50th=[16581], 99.90th=[17957], 99.95th=[18482], 00:19:30.378 | 99.99th=[18482] 00:19:30.378 bw ( KiB/s): min=107616, max=126592, per=87.58%, avg=120072.00, stdev=8872.66, samples=4 00:19:30.378 iops : min= 6726, max= 7912, avg=7504.50, stdev=554.54, samples=4 00:19:30.378 lat (usec) : 500=0.02%, 750=4.36%, 1000=22.31% 00:19:30.378 lat (msec) : 2=30.12%, 4=2.14%, 10=11.12%, 20=29.94% 00:19:30.378 cpu : usr=96.36%, sys=1.70%, ctx=227, majf=0, minf=3 00:19:30.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:30.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.378 issued rwts: total=29512,15055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.378 00:19:30.378 Run status group 0 (all jobs): 00:19:30.378 READ: bw=235MiB/s (247MB/s), 235MiB/s-235MiB/s (247MB/s-247MB/s), io=461MiB (484MB), run=1960-1960msec 00:19:30.378 WRITE: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=235MiB (247MB), run=1757-1757msec 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 
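Both fio jobs above run through SPDK's userspace NVMe driver rather than the kernel initiator: the fio_nvme wrapper LD_PRELOADs the build/fio/spdk_nvme plugin and packs the whole NVMe-oF transport ID into fio's --filename argument (hence ioengine=spdk in the job banners). A minimal sketch of the first job's invocation, with paths abbreviated and the argument syntax copied from the trace:

  # drive the RDMA subsystem with fio's example job via the SPDK ioengine
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
      app/fio/nvme/example_config.fio --bs=4096 \
      --filename='trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'

The ldd/grep libasan and libclang_rt.asan steps in the trace only look for sanitizer runtimes to prepend to LD_PRELOAD; in this non-sanitizer build they resolve to empty strings, which is why asan_lib= is set with no value.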
00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:30.378 rmmod nvme_rdma 00:19:30.378 rmmod nvme_fabrics 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 826444 ']' 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 826444 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 826444 ']' 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 826444 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 826444 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 826444' 00:19:30.378 killing process with pid 826444 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 826444 00:19:30.378 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 826444 00:19:30.710 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:30.710 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:30.710 00:19:30.710 real 0m12.862s 00:19:30.710 user 0m52.712s 00:19:30.710 sys 0m4.842s 00:19:30.710 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.710 04:07:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.711 ************************************ 00:19:30.711 END TEST nvmf_fio_host 00:19:30.711 ************************************ 00:19:30.711 04:07:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:19:30.711 04:07:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:30.711 04:07:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.711 04:07:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.711 ************************************ 00:19:30.711 START TEST nvmf_failover 00:19:30.711 ************************************ 00:19:30.711 04:07:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:19:30.711 * Looking for test storage... 00:19:30.711 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:30.711 04:07:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:30.711 04:07:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:30.711 04:07:24 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.711 --rc genhtml_branch_coverage=1 00:19:30.711 --rc genhtml_function_coverage=1 00:19:30.711 --rc genhtml_legend=1 00:19:30.711 --rc geninfo_all_blocks=1 00:19:30.711 --rc geninfo_unexecuted_blocks=1 00:19:30.711 00:19:30.711 ' 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.711 --rc genhtml_branch_coverage=1 00:19:30.711 --rc genhtml_function_coverage=1 00:19:30.711 --rc genhtml_legend=1 00:19:30.711 --rc geninfo_all_blocks=1 00:19:30.711 --rc geninfo_unexecuted_blocks=1 00:19:30.711 00:19:30.711 ' 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.711 --rc genhtml_branch_coverage=1 00:19:30.711 --rc genhtml_function_coverage=1 00:19:30.711 --rc genhtml_legend=1 00:19:30.711 --rc geninfo_all_blocks=1 00:19:30.711 --rc geninfo_unexecuted_blocks=1 00:19:30.711 00:19:30.711 ' 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.711 --rc genhtml_branch_coverage=1 00:19:30.711 --rc genhtml_function_coverage=1 00:19:30.711 --rc genhtml_legend=1 00:19:30.711 --rc geninfo_all_blocks=1 00:19:30.711 --rc geninfo_unexecuted_blocks=1 00:19:30.711 00:19:30.711 ' 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.711 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.005 04:07:25 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.005 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.005 04:07:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:36.277 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:36.277 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:36.277 Found net devices under 0000:18:00.0: mlx_0_0 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:36.277 Found net devices under 0000:18:00.1: mlx_0_1 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:19:36.277 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:36.278 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:36.278 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:19:36.278 altname enp24s0f0np0 00:19:36.278 altname ens785f0np0 00:19:36.278 inet 192.168.100.8/24 scope global mlx_0_0 00:19:36.278 
valid_lft forever preferred_lft forever 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:36.278 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:36.278 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:19:36.278 altname enp24s0f1np1 00:19:36.278 altname ens785f1np1 00:19:36.278 inet 192.168.100.9/24 scope global mlx_0_1 00:19:36.278 valid_lft forever preferred_lft forever 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:36.278 04:07:30 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:36.278 192.168.100.9' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:36.278 192.168.100.9' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:36.278 192.168.100.9' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=831274 00:19:36.278 
04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 831274 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 831274 ']' 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.278 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:36.278 [2024-12-10 04:07:30.467636] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:36.278 [2024-12-10 04:07:30.467681] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.278 [2024-12-10 04:07:30.525050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:36.279 [2024-12-10 04:07:30.563943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.279 [2024-12-10 04:07:30.563974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.279 [2024-12-10 04:07:30.563981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.279 [2024-12-10 04:07:30.563986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.279 [2024-12-10 04:07:30.563991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
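nvmfappstart launches the target and then blocks in waitforlisten until the RPC socket answers; a condensed sketch of that start-and-poll pattern follows (rpc_get_methods is used here as a stand-in readiness probe, not necessarily what waitforlisten itself calls):

  # Start nvmf_tgt with the flags from this run, then wait for its RPC socket.
  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the UNIX-domain socket until the target accepts RPCs.
  until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done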
00:19:36.279 [2024-12-10 04:07:30.565075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.279 [2024-12-10 04:07:30.565158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.279 [2024-12-10 04:07:30.565160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.279 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.279 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:36.279 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.279 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:36.279 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:36.537 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.537 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:36.537 [2024-12-10 04:07:30.866628] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1508800/0x150ccf0) succeed. 00:19:36.537 [2024-12-10 04:07:30.874677] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1509df0/0x154e390) succeed. 00:19:36.795 04:07:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:36.795 Malloc0 00:19:37.054 04:07:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:37.054 04:07:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:37.313 04:07:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:37.572 [2024-12-10 04:07:31.718919] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:37.572 04:07:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:37.572 [2024-12-10 04:07:31.911292] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:37.572 04:07:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:37.831 [2024-12-10 04:07:32.095943] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=831592 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 831592 /var/tmp/bdevperf.sock 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 831592 ']' 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.831 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:38.090 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.091 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:38.091 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:38.349 NVMe0n1 00:19:38.349 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:38.609 00:19:38.609 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:38.609 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=831607 00:19:38.609 04:07:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:39.545 04:07:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:39.803 04:07:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:43.092 04:07:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:43.092 00:19:43.092 04:07:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:43.350 04:07:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:46.638 04:07:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:46.638 [2024-12-10 04:07:40.665416] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:46.638 04:07:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:47.574 04:07:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:47.574 04:07:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 831607 00:19:54.145 { 00:19:54.145 "results": [ 00:19:54.145 { 00:19:54.145 "job": "NVMe0n1", 00:19:54.145 "core_mask": "0x1", 00:19:54.145 "workload": "verify", 00:19:54.145 "status": "finished", 00:19:54.145 "verify_range": { 00:19:54.145 "start": 0, 00:19:54.145 "length": 16384 00:19:54.145 }, 00:19:54.145 "queue_depth": 128, 00:19:54.145 "io_size": 4096, 00:19:54.145 "runtime": 15.004631, 00:19:54.145 "iops": 15100.404668398709, 00:19:54.145 "mibps": 58.98595573593246, 00:19:54.145 "io_failed": 4796, 00:19:54.145 "io_timeout": 0, 00:19:54.145 "avg_latency_us": 8276.321821226167, 00:19:54.145 "min_latency_us": 329.19703703703703, 00:19:54.145 "max_latency_us": 1031488.0948148149 00:19:54.145 } 00:19:54.145 ], 00:19:54.145 "core_count": 1 00:19:54.145 } 00:19:54.145 04:07:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 831592 00:19:54.145 04:07:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 831592 ']' 00:19:54.145 04:07:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 831592 00:19:54.145 04:07:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:54.145 04:07:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.145 04:07:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 831592 00:19:54.145 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.145 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.145 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 831592' 00:19:54.145 killing process with pid 831592 00:19:54.145 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 831592 00:19:54.145 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 831592 00:19:54.145 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:54.145 [2024-12-10 04:07:32.172159] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
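The JSON block above is bdevperf's per-job summary: roughly 15.1k IOPS sustained across the 15 s verify run, with 4796 I/Os failed in flight during the path switches. A hypothetical one-liner to pull the headline figures out of a saved copy (jq assumed available; results.json is not produced by the test itself):

  # Hypothetical extraction of the headline numbers from the JSON summary above.
  jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.io_failed) failed, avg \(.avg_latency_us|floor) us"' results.json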
00:19:54.145 [2024-12-10 04:07:32.172220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831592 ] 00:19:54.145 [2024-12-10 04:07:32.229762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.145 [2024-12-10 04:07:32.267873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.145 Running I/O for 15 seconds... 00:19:54.145 18751.00 IOPS, 73.25 MiB/s [2024-12-10T03:07:48.534Z] 10176.00 IOPS, 39.75 MiB/s [2024-12-10T03:07:48.534Z] [2024-12-10 04:07:35.029053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.145 [2024-12-10 04:07:35.029089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:19:54.145 [2024-12-10 04:07:35.029098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.145 [2024-12-10 04:07:35.029104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:19:54.145 [2024-12-10 04:07:35.029112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.145 [2024-12-10 04:07:35.029118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:19:54.145 [2024-12-10 04:07:35.029125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.145 [2024-12-10 04:07:35.029131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:19:54.145 [2024-12-10 04:07:35.030977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:19:54.145 [2024-12-10 04:07:35.030990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
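The failover notice below works because bdevperf registered all three portals as paths for the same controller earlier in the trace (host/failover.sh steps 35, 36 and 47, each with -x failover); condensed into a loop, that registration amounts to:

  # The three attach calls made over the bdevperf RPC socket, folded into a loop.
  rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  for port in 4420 4421 4422; do
    $rpc bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 \
      -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done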
00:19:54.145 [2024-12-10 04:07:35.030999] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:19:54.145 [2024-12-10 04:07:35.031007] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:54.145 [2024-12-10 04:07:35.031023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.145 [2024-12-10 04:07:35.031030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.145 [2024-12-10 04:07:35.031070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 
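Each notice in the run below is one outstanding I/O completing with status (00/08): status code type 0 (generic), status code 0x08, command aborted due to SQ deletion, which is the expected way in-flight commands drain when the qpair to 4420 is torn down mid-stream. A hypothetical tally over the captured log (qid:1 filters to the I/O queue; the qid:0 entries above are the aborted admin async-event requests):

  # Hypothetical post-processing: count I/O commands aborted by the SQ deletion.
  grep -c 'ABORTED - SQ DELETION (00/08) qid:1' \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt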
00:19:54.146 [2024-12-10 04:07:35.031350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 
sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.031974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.031981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.032015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.032048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.032081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.146 [2024-12-10 04:07:35.032113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31784 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x180f00 
00:19:54.146 [2024-12-10 04:07:35.032624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043da000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.032987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.032994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.033021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.033027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.033053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.033060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.033087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.033094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.033121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.033127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.033154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.033161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.033189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.033196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.146 [2024-12-10 04:07:35.033223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x180f00 00:19:54.146 [2024-12-10 04:07:35.033230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 
[2024-12-10 04:07:35.033563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.033969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.033976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:32224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 
key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.034984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.034990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.035017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.035023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.035050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.035056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.035083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.035091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.035118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.035125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.035151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.035158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.035184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.035192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.035218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.035225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.035252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:32480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.035258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.035289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x180f00 00:19:54.147 [2024-12-10 04:07:35.035296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.048770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:54.147 [2024-12-10 04:07:35.048786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:54.147 [2024-12-10 04:07:35.048792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32496 len:8 PRP1 0x0 PRP2 0x0 00:19:54.147 [2024-12-10 04:07:35.048799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:35.048863] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:19:54.147 [2024-12-10 04:07:35.048892] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
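[editor's note] The storm of paired nvme_io_qpair_print_command / spdk_nvme_print_completion notices above is the expected shape of a forced failover: once bdev_nvme_failover_trid starts moving nqn.2016-06.io.spdk:cnode1 from 192.168.100.8:4420 to 192.168.100.8:4421, every command still queued on I/O submission queue 1 is completed with generic status 00/08 ("ABORTED - SQ DELETION") and dnr:0, so the commands may be retried once the controller reset completes. To make a burst like this skimmable, below is a small, hypothetical triage sketch (plain Python, not part of SPDK or of this test suite); the regexes mirror the literal record format printed above, and the throughput check uses the fact that each command here carries len:8 logical blocks with an SGL of len:0x1000 (4096 bytes), i.e. 4 KiB per I/O at 512-byte blocks.

    #!/usr/bin/env python3
    """Hypothetical triage helper -- NOT part of the SPDK tree or this test.

    Summarizes an abort burst like the one in this console log: tallies the
    READ/WRITE prints from nvme_io_qpair_print_command and the status prints
    from spdk_nvme_print_completion. The regexes mirror the literal record
    format in the log; the 4 KiB-per-I/O figure is derived from len:8 blocks
    plus the SGL len:0x1000 seen in the same records.
    """
    import re
    from collections import Counter

    # "*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32504 len:8 ..."
    CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")
    # "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 ..."
    CPL = re.compile(r"\*NOTICE\*: ([A-Z]+) - ([A-Z ]+) \((\d{2})/(\d{2})\)")

    def summarize(log_text: str) -> None:
        """Print per-opcode command counts, LBA spans, and status counts."""
        ops: Counter = Counter()
        span: dict[str, tuple[int, int]] = {}
        for op, lba, nlb in CMD.findall(log_text):
            first, last = int(lba), int(lba) + int(nlb) - 1
            ops[op] += 1
            lo, hi = span.get(op, (first, last))
            span[op] = (min(lo, first), max(hi, last))
        statuses = Counter(
            f"{name} - {reason} ({sct}/{sc})"
            for name, reason, sct, sc in CPL.findall(log_text)
        )
        for op, n in ops.items():
            lo, hi = span[op]
            print(f"{op:5s} x{n}  lba {lo}..{hi}")
        for status, n in statuses.items():
            print(f"{status}  x{n}")

    # Sanity check against the interleaved bdevperf samples after the reset:
    # 4 KiB per I/O gives 12130.33 IOPS * 4096 B ~= 47.38 MiB/s, matching the
    # "12130.33 IOPS, 47.38 MiB/s" sample printed in this log.
    assert abs(12130.33 * 4096 / 2**20 - 47.38) < 0.01

    if __name__ == "__main__":
        summarize(
            "[2024-12-10 04:07:35.031023] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
            "*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000\n"
            "[2024-12-10 04:07:35.031030] nvme_qpair.c: 474:spdk_nvme_print_completion: "
            "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0\n"
        )

Run over the full console log, this would reduce the window above to one READ line and one WRITE line with counts and LBA spans, plus a single "ABORTED - SQ DELETION (00/08)" status tally, which is usually all a reviewer needs before the "resetting controller" / "Resetting controller successful" records that follow. [end editor's note]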
00:19:54.147 [2024-12-10 04:07:35.051445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:54.147 [2024-12-10 04:07:35.094001] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:54.147 12130.33 IOPS, 47.38 MiB/s [2024-12-10T03:07:48.536Z] 13886.75 IOPS, 54.25 MiB/s [2024-12-10T03:07:48.536Z] 13102.40 IOPS, 51.18 MiB/s [2024-12-10T03:07:48.536Z] [2024-12-10 04:07:38.481443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.147 [2024-12-10 04:07:38.481480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.147 [2024-12-10 04:07:38.481496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.147 [2024-12-10 04:07:38.481507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.148 [2024-12-10 04:07:38.481521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.148 [2024-12-10 04:07:38.481534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:60 nsid:1 lba:15888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15960 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000043c4000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 04:07:38.481835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.148 [2024-12-10 04:07:38.481842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x180f00 00:19:54.148 [2024-12-10 
00:19:54.148 [... continuation of the abort printout: a long run of interleaved nvme_qpair.c: 243:nvme_io_qpair_print_command READ/WRITE prints (sqid:1, nsid:1, lba 16040-16864, len:8), each followed by a nvme_qpair.c: 474:spdk_nvme_print_completion print of "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0", as the submission queue of qpair 1 is deleted; individual entries elided ...]
00:19:54.149 [2024-12-10 04:07:38.485060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:54.149 [2024-12-10 04:07:38.485072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:54.149 [2024-12-10 04:07:38.485078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16872 len:8 PRP1 0x0 PRP2 0x0
00:19:54.149 [2024-12-10 04:07:38.485085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:54.149 [2024-12-10 04:07:38.485127] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:19:54.149 [2024-12-10 04:07:38.485137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:19:54.149 [2024-12-10 04:07:38.487705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:19:54.149 [2024-12-10 04:07:38.501019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:19:54.149 [2024-12-10 04:07:38.541082] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
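For reference, the "(00/08)" pair that spdk_nvme_print_completion appends to every aborted entry above is the NVMe completion status split into Status Code Type and Status Code: SCT 0x0 is the generic command status set, and within it SC 0x08 is "Command Aborted due to SQ Deletion", which matches the text of each print. A minimal standalone C sketch of that decoding (illustrative only, not SPDK's types or code):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only -- not SPDK's structures. The NVMe completion
     * status field carries a Status Code Type (SCT) and a Status Code (SC). */
    struct nvme_status {
        uint8_t sct; /* status code type */
        uint8_t sc;  /* status code */
    };

    static const char *describe(struct nvme_status s)
    {
        /* SCT 0x0 = generic command status; within it, SC 0x08 is
         * "Command Aborted due to SQ Deletion" per the NVMe base spec. */
        if (s.sct == 0x0 && s.sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "other status";
    }

    int main(void)
    {
        struct nvme_status s = { .sct = 0x00, .sc = 0x08 };
        printf("(%02x/%02x) => %s\n", s.sct, s.sc, describe(s)); /* (00/08) */
        return 0;
    }

Each queued request is failed with this explicit status rather than dropped, so the bdev layer sees a deterministic abort for every outstanding I/O before the failover to 192.168.100.8:4422 proceeds.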
00:19:54.149 12197.83 IOPS, 47.65 MiB/s [2024-12-10T03:07:48.538Z] 13208.86 IOPS, 51.60 MiB/s [2024-12-10T03:07:48.538Z] 13953.88 IOPS, 54.51 MiB/s [2024-12-10T03:07:48.538Z] 14347.44 IOPS, 56.04 MiB/s [2024-12-10T03:07:48.538Z]
00:19:54.149 [... second abort burst starting at 04:07:42.866: another long run of nvme_qpair.c: 243:nvme_io_qpair_print_command READ/WRITE prints (sqid:1, nsid:1, lba 14176-14960, len:8), each completed as "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0"; the excerpt cuts off mid-entry, individual prints elided ...]
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 
04:07:42.867613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x180f00 00:19:54.151 [2024-12-10 04:07:42.867690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x180f00 00:19:54.151 [2024-12-10 04:07:42.867704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x180f00 00:19:54.151 [2024-12-10 04:07:42.867717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x180f00 00:19:54.151 [2024-12-10 04:07:42.867731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x180f00 00:19:54.151 [2024-12-10 
04:07:42.867745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x180f00 00:19:54.151 [2024-12-10 04:07:42.867758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.867792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:54.151 [2024-12-10 04:07:42.867798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:eb696000 sqhd:7210 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.869652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:54.151 [2024-12-10 04:07:42.869664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:54.151 [2024-12-10 04:07:42.869670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15192 len:8 PRP1 0x0 PRP2 0x0 00:19:54.151 [2024-12-10 04:07:42.869677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.151 [2024-12-10 04:07:42.869717] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:19:54.151 [2024-12-10 04:07:42.869727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:54.151 [2024-12-10 04:07:42.872313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:54.151 [2024-12-10 04:07:42.885441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:19:54.151 [2024-12-10 04:07:42.912985] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:19:54.151 12984.30 IOPS, 50.72 MiB/s [2024-12-10T03:07:48.540Z]
00:19:54.151 13560.55 IOPS, 52.97 MiB/s [2024-12-10T03:07:48.540Z]
00:19:54.151 14043.92 IOPS, 54.86 MiB/s [2024-12-10T03:07:48.540Z]
00:19:54.151 14450.69 IOPS, 56.45 MiB/s [2024-12-10T03:07:48.540Z]
00:19:54.151 14802.21 IOPS, 57.82 MiB/s
00:19:54.151 Latency(us)
00:19:54.151 [2024-12-10T03:07:48.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:54.151 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:54.151 Verification LBA range: start 0x0 length 0x4000
00:19:54.151 NVMe0n1 : 15.00 15100.40 58.99 319.63 0.00 8276.32 329.20 1031488.09
00:19:54.151 [2024-12-10T03:07:48.540Z] ===================================================================================================================
00:19:54.151 [2024-12-10T03:07:48.540Z] Total : 15100.40 58.99 319.63 0.00 8276.32 329.20 1031488.09
00:19:54.151 Received shutdown signal, test time was about 15.000000 seconds
00:19:54.151
00:19:54.151 Latency(us)
00:19:54.151 [2024-12-10T03:07:48.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:54.151 [2024-12-10T03:07:48.540Z] ===================================================================================================================
00:19:54.151 [2024-12-10T03:07:48.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=834482
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 834482 /var/tmp/bdevperf.sock
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 834482 ']'
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:54.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
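For readers following the trace: the failover.sh@65-@67 lines above are the test's pass gate, counting one 'Resetting controller successful' notice per planned failover. A minimal sketch of the same check, assuming the bdevperf log was captured to the try.txt path shown later in this log:

# Sketch of the @65-@67 gate: three planned failovers must each end in a reset.
log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
if (( count != 3 )); then
    echo "expected 3 successful resets, got $count" >&2
    exit 1
fi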
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:19:54.151 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:19:54.410 [2024-12-10 04:07:48.608340] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:19:54.410 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:19:54.410 [2024-12-10 04:07:48.784903] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:19:54.667 04:07:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:19:54.667 NVMe0n1
00:19:54.925 04:07:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:19:54.925
00:19:55.184 04:07:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:19:55.184
00:19:55.184 04:07:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:55.184 04:07:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:19:55.443 04:07:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:55.702 04:07:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:19:58.982 04:07:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:58.982 04:07:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:19:58.982 04:07:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=835288
00:19:58.982 04:07:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:58.982 04:07:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 835288
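Taken together, the failover.sh@76-@80 trace above is the whole multipath setup for this phase. A condensed sketch of the same RPC sequence, with the long rpc.py path shortened; every flag and address is copied from the log:

# Expose the subsystem on two extra RDMA listeners so the host has failover targets.
RPC=scripts/rpc.py
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
# Attach all three paths to bdevperf; -x failover selects the failover multipath policy.
for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
done

Only the first attach prints the bdev name (NVMe0n1); the later attaches add paths to the existing controller, which is why their output lines above are empty.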
00:19:59.915 "core_mask": "0x1", 00:19:59.915 "workload": "verify", 00:19:59.915 "status": "finished", 00:19:59.915 "verify_range": { 00:19:59.915 "start": 0, 00:19:59.915 "length": 16384 00:19:59.915 }, 00:19:59.915 "queue_depth": 128, 00:19:59.915 "io_size": 4096, 00:19:59.915 "runtime": 1.01055, 00:19:59.915 "iops": 18999.55469793677, 00:19:59.915 "mibps": 74.2170105388155, 00:19:59.915 "io_failed": 0, 00:19:59.915 "io_timeout": 0, 00:19:59.915 "avg_latency_us": 6701.395120987655, 00:19:59.915 "min_latency_us": 2512.213333333333, 00:19:59.915 "max_latency_us": 12281.931851851852 00:19:59.915 } 00:19:59.915 ], 00:19:59.915 "core_count": 1 00:19:59.915 } 00:19:59.916 04:07:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:59.916 [2024-12-10 04:07:48.266599] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:19:59.916 [2024-12-10 04:07:48.266647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834482 ] 00:19:59.916 [2024-12-10 04:07:48.325545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.916 [2024-12-10 04:07:48.360966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.916 [2024-12-10 04:07:49.898324] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:19:59.916 [2024-12-10 04:07:49.898961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:19:59.916 [2024-12-10 04:07:49.898990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:19:59.916 [2024-12-10 04:07:49.914122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.916 [2024-12-10 04:07:49.929989] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:19:59.916 Running I/O for 1 seconds... 
00:19:59.916 18952.00 IOPS, 74.03 MiB/s
00:19:59.916 Latency(us)
00:19:59.916 [2024-12-10T03:07:54.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:59.916 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:59.916 Verification LBA range: start 0x0 length 0x4000
00:19:59.916 NVMe0n1 : 1.01 18999.55 74.22 0.00 0.00 6701.40 2512.21 12281.93
00:19:59.916 [2024-12-10T03:07:54.305Z] ===================================================================================================================
00:19:59.916 [2024-12-10T03:07:54.305Z] Total : 18999.55 74.22 0.00 0.00 6701.40 2512.21 12281.93
00:19:59.916 04:07:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:59.916 04:07:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:20:00.173 04:07:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:00.430 04:07:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:00.430 04:07:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:20:00.430 04:07:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:00.688 04:07:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:20:03.968 04:07:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:03.968 04:07:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 834482
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 834482 ']'
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 834482
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 834482
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 834482'
00:20:03.968 killing process with pid 834482
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 834482
00:20:03.968 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 834482
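The @95-@100 steps above tear the extra paths down one at a time, re-checking after each detach that the NVMe0 controller is still registered (with a final @103 re-check after the sleep). A minimal sketch of that pattern; the RPC variable is shorthand for the full rpc.py invocation in the trace:

# Detach-then-verify pattern mirroring failover.sh@95-@100.
RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
for port in 4422 4421; do
    $RPC bdev_nvme_get_controllers | grep -q NVMe0   # controller must still exist
    $RPC bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 \
        -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # drop one path
done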
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:04.227 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:04.227 rmmod nvme_rdma
00:20:04.227 rmmod nvme_fabrics
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 831274 ']'
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 831274
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 831274 ']'
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 831274
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 831274
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 831274'
00:20:04.485 killing process with pid 831274
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 831274
00:20:04.485 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 831274
00:20:04.743 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:04.743 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:20:04.743
00:20:04.743 real 0m34.014s
00:20:04.743 user 1m56.388s
00:20:04.743 sys 0m5.972s
00:20:04.744 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:04.744 04:07:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
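The killprocess traces above (autotest_common.sh@954-@978, run once for bdevperf pid 834482 and once for the target pid 831274) are the stock teardown helper. A minimal sketch of what those xtrace steps amount to, reconstructed from the trace rather than copied from the script:

# Reconstruction of the killprocess steps: refuse sudo-owned processes,
# then kill the pid and reap it.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # @954: pid must be set
    kill -0 "$pid" || return 0                # @958: already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # @960
        [ "$process_name" = sudo ] && return 1            # @964: never kill sudo
    fi
    echo "killing process with pid $pid"      # @972
    kill "$pid"                               # @973
    wait "$pid"                               # @978: reap, propagate exit status
}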
00:20:04.744 ************************************
00:20:04.744 END TEST nvmf_failover
00:20:04.744 ************************************
00:20:04.744 04:07:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:20:04.744 04:07:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:04.744 04:07:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:04.744 04:07:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:04.744 ************************************
00:20:04.744 START TEST nvmf_host_discovery
00:20:04.744 ************************************
00:20:04.744 04:07:58 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:20:04.744 * Looking for test storage...
00:20:04.744 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:20:04.744 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:04.744 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:20:04.744 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:05.002 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:05.002 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:05.002 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:05.002 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:05.002 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:20:05.002 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
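The cmp_versions xtrace that completes above (scripts/common.sh@373-@368) is bash's field-by-field version compare: split both versions on '.' and '-', then walk the fields until one side wins. A compact sketch of that logic, reconstructed from the trace rather than copied from scripts/common.sh, and assuming purely numeric fields:

# lt A B: succeeds when version A sorts before version B.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.- ver1 ver2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left is newer: not '<'
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left is older: '<' holds
    done
    return 1                                              # equal: not strictly '<'
}
lt 1.15 2 && echo 'lcov 1.15 predates 2'   # matches the @1711 check in the trace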
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2-@4 -- # PATH=... (three prepend passes rotating /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the stock /usr/local/... PATH; full values condensed)
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo $PATH (full value condensed)
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:20:05.003 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']'
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:20:05.003 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:20:05.003 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0
00:20:05.003
00:20:05.003 real 0m0.206s
00:20:05.003 user 0m0.126s
00:20:05.003 sys 0m0.095s
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:05.004 ************************************
00:20:05.004 END TEST nvmf_host_discovery
00:20:05.004 ************************************
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:05.004 ************************************
00:20:05.004 START TEST nvmf_host_multipath_status
00:20:05.004 ************************************
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:20:05.004 * Looking for test storage...
00:20:05.004 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version
00:20:05.004 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333-@344 -- # (same version-compare setup as traced for nvmf_host_discovery above)
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364-@368 -- # (field-by-field compare of 1.15 vs 2 as above; returns 0, lcov is old enough)
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724-@1725 -- # export LCOV_OPTS and LCOV (same coverage flags as exported for nvmf_host_discovery above)
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9-@22 -- # NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562, NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562, NVME_HOST, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn (same values as for nvmf_host_discovery above)
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:05.263 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2-@6 -- # PATH prepend passes and export as above (full values condensed)
00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:20:05.264 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:20:05.264 04:07:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:10.530 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.530 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:20:10.530 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:10.530 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:10.530 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:10.530 04:08:04 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:10.530 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:10.531 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:10.790 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:10.790 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:10.790 Found net devices under 0000:18:00.0: mlx_0_0 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:10.790 Found net devices under 0000:18:00.1: mlx_0_1 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:10.790 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:10.791 
04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:10.791 04:08:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:10.791 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:10.791 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:20:10.791 altname enp24s0f0np0 00:20:10.791 altname ens785f0np0 00:20:10.791 inet 192.168.100.8/24 scope global mlx_0_0 00:20:10.791 valid_lft forever preferred_lft forever 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
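
The get_ip_address calls traced above reduce to a three-stage pipeline over "ip -o -4 addr show". A minimal standalone sketch in bash, assuming only the interface names and the 192.168.100.0/24 addressing shown in this log (the function wrapper is illustrative):

    # Print the first IPv4 address bound to an interface, prefix length stripped.
    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 looks like "192.168.100.8/24"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed

The awk/cut split mirrors the trace: awk isolates the CIDR field and cut drops the /24 suffix, leaving a bare address that can be compared and exported.
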
00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:10.791 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:10.791 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:20:10.791 altname enp24s0f1np1 00:20:10.791 altname ens785f1np1 00:20:10.791 inet 192.168.100.9/24 scope global mlx_0_1 00:20:10.791 valid_lft forever preferred_lft forever 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:10.791 192.168.100.9' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:10.791 192.168.100.9' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:10.791 192.168.100.9' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@509 -- # nvmfpid=839652 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 839652 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 839652 ']' 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.791 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:10.791 [2024-12-10 04:08:05.159910] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:10.791 [2024-12-10 04:08:05.159953] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.049 [2024-12-10 04:08:05.217149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:11.049 [2024-12-10 04:08:05.254929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.049 [2024-12-10 04:08:05.254961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.049 [2024-12-10 04:08:05.254967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.049 [2024-12-10 04:08:05.254972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.049 [2024-12-10 04:08:05.254977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
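
The nvmfappstart sequence above (nvmfpid=839652, then waitforlisten) comes down to launching nvmf_tgt in the background and blocking until its RPC socket answers. A simplified sketch, with the binary path and flags taken from the log; the polling loop is a stand-in for waitforlisten, not its exact implementation:

    # Start the NVMe-oF target on cores 0-1 (-m 0x3) with all tracepoint groups enabled
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Wait until the app answers RPCs on /var/tmp/spdk.sock, bailing out if it died
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done
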
00:20:11.049 [2024-12-10 04:08:05.255965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.049 [2024-12-10 04:08:05.255969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.049 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.049 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:11.049 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.049 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.049 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:11.049 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.049 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=839652 00:20:11.049 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:11.307 [2024-12-10 04:08:05.565291] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5a4940/0x5a8e30) succeed. 00:20:11.307 [2024-12-10 04:08:05.573312] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5a5e90/0x5ea4d0) succeed. 00:20:11.307 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:11.565 Malloc0 00:20:11.565 04:08:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:11.823 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.823 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:12.092 [2024-12-10 04:08:06.343363] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:12.092 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:12.373 [2024-12-10 04:08:06.511593] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=839942 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 839942 /var/tmp/bdevperf.sock 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 839942 ']' 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.373 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:12.646 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.646 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:12.646 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:12.646 04:08:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:12.903 Nvme0n1 00:20:12.903 04:08:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:13.161 Nvme0n1 00:20:13.161 04:08:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:13.161 04:08:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:15.060 04:08:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:15.060 04:08:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:20:15.318 04:08:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:15.576 04:08:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:16.509 04:08:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:16.509 04:08:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:16.509 04:08:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.509 04:08:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:16.766 04:08:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.766 04:08:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:16.766 04:08:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.766 04:08:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:16.766 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:16.766 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:16.766 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:16.766 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.023 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.023 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:17.023 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:17.023 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.280 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.280 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:17.280 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.280 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:17.538 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.538 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
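
Every check_status round in this trace is the same probe repeated six times: one bdev_nvme_get_io_paths RPC against the bdevperf socket, a jq projection on the chosen port and attribute, and a string compare. A sketch of that pattern (RPC name, socket path, and jq filter are verbatim from the trace; the wrapper itself is illustrative):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    port_status() {   # usage: port_status <trsvcid> <attribute> <expected>
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }
    port_status 4420 current true      # first probe of a check_status round
    port_status 4421 accessible true   # last probe of a check_status round
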
00:20:17.538 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.538 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:17.538 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.538 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:17.538 04:08:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:17.795 04:08:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:18.052 04:08:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:18.982 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:18.982 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:18.982 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.982 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:19.239 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:19.239 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:19.239 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.239 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:19.239 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.239 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:19.239 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.239 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:19.495 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.495 04:08:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:19.495 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.495 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:19.752 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.752 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:19.752 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.752 04:08:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:19.752 04:08:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.752 04:08:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:19.752 04:08:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.752 04:08:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:20.009 04:08:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.009 04:08:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:20.009 04:08:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:20.266 04:08:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:20:20.266 04:08:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:21.635 04:08:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:21.635 04:08:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:21.635 04:08:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.635 04:08:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:21.635 04:08:15 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.635 04:08:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:21.635 04:08:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:21.635 04:08:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.635 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:21.635 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:21.635 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.635 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:21.892 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.892 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:21.892 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.892 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:22.149 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.149 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:22.149 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.149 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:22.149 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.149 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:22.149 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.407 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:22.407 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.407 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:20:22.407 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:22.663 04:08:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:20:22.920 04:08:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:23.849 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:23.849 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:23.850 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.850 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:24.107 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.107 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:24.107 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.107 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:24.107 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:24.107 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:24.107 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.107 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:24.364 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.364 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:24.364 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.364 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:24.621 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.621 04:08:18 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:24.621 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:24.621 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.621 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.621 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:24.621 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.621 04:08:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:24.877 04:08:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:24.877 04:08:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:24.877 04:08:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:20:25.133 04:08:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:20:25.133 04:08:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:26.502 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:26.502 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:26.502 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.503 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:26.503 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:26.503 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:26.503 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.503 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:26.503 04:08:20 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:26.503 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:26.503 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:26.503 04:08:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.759 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.759 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:26.759 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:26.759 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.016 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.016 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:27.016 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.016 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:27.016 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:27.016 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:27.016 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.016 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:27.273 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:27.273 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:27.273 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:20:27.530 04:08:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:27.530 04:08:21 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:28.900 04:08:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:28.900 04:08:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:28.900 04:08:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.900 04:08:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:28.900 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:28.900 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:28.900 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.900 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:28.900 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.900 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:28.900 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.900 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:29.158 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.158 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:29.158 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.158 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:29.415 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.415 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:29.415 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.415 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:29.415 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:20:29.415 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:29.415 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.415 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:29.672 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.672 04:08:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:29.929 04:08:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:29.929 04:08:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:20:30.187 04:08:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:30.187 04:08:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.557 04:08:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:31.814 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.814 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:31.814 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.814 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:32.071 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:32.071 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:32.071 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.071 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:32.071 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:32.071 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:32.071 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.071 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:32.328 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:32.328 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:32.328 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:32.586 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:32.586 04:08:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:33.955 04:08:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:33.955 04:08:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:33.955 04:08:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.955 04:08:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:33.955 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:33.955 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:33.955 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.955 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:33.955 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:33.955 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:33.955 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.955 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:34.211 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.211 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:34.211 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:34.211 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:34.468 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.468 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:34.468 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:34.468 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:34.468 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.468 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:34.468 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:34.468 04:08:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:34.724 04:08:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.724 04:08:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:34.724 04:08:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:34.981 04:08:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:20:34.981 04:08:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:36.350 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:36.350 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:36.350 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.350 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:36.350 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.350 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:36.350 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.350 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:36.608 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.608 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:36.608 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.608 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:36.608 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.608 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:36.608 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:20:36.608 04:08:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:36.864 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.864 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:36.864 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.865 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:37.122 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:37.122 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:37.122 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:37.122 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:37.122 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:37.122 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:37.122 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:37.379 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:20:37.636 04:08:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:38.568 04:08:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:38.568 04:08:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:38.568 04:08:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:38.568 04:08:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:38.826 04:08:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:38.826 04:08:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:38.826 04:08:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:38.826 04:08:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:38.826 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:38.826 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:38.826 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:38.826 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:39.083 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.083 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:39.083 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.083 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:39.341 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.341 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:39.341 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.341 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 839942 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 839942 ']' 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 839942 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 839942 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 839942' 00:20:39.598 killing process with pid 839942 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 839942 00:20:39.598 04:08:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 839942 00:20:39.598 { 00:20:39.598 "results": [ 00:20:39.598 { 00:20:39.598 "job": "Nvme0n1", 00:20:39.598 "core_mask": "0x4", 00:20:39.598 "workload": "verify", 00:20:39.598 "status": "terminated", 00:20:39.598 "verify_range": { 00:20:39.598 "start": 0, 00:20:39.598 "length": 16384 00:20:39.598 }, 00:20:39.598 "queue_depth": 128, 00:20:39.598 "io_size": 4096, 00:20:39.598 "runtime": 26.393261, 00:20:39.598 "iops": 16692.97325555944, 00:20:39.598 "mibps": 65.20692677952906, 00:20:39.598 "io_failed": 0, 00:20:39.598 "io_timeout": 0, 00:20:39.598 "avg_latency_us": 7646.878812049449, 00:20:39.598 "min_latency_us": 682.6666666666666, 00:20:39.598 "max_latency_us": 3007471.3125925926 00:20:39.598 } 00:20:39.598 ], 00:20:39.598 "core_count": 1 00:20:39.598 } 00:20:39.862 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 839942 00:20:39.862 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:39.862 [2024-12-10 04:08:06.573658] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:20:39.862 [2024-12-10 04:08:06.573708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839942 ] 00:20:39.862 [2024-12-10 04:08:06.629204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.862 [2024-12-10 04:08:06.667831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.862 Running I/O for 90 seconds... 
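Every status check above follows the same pattern: query bdevperf's RPC socket with bdev_nvme_get_io_paths, extract one attribute of the io_path for a given listener port with jq, and compare it against the expected value; set_ANA_state flips both listeners on the target, and check_status then asserts all six attributes after the one-second settle. A minimal sketch of those helpers, reconstructed from the commands echoed in this trace (the socket path, NQN, target address, jq filter, and RPC names are the ones shown above; error handling and the real script's surrounding structure are not visible in the log and are omitted):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    port_status() {
        local port=$1 attr=$2 expected=$3
        # Pull a single attribute (current/connected/accessible) for this port
        # out of bdevperf's view of the I/O paths.
        local actual
        actual=$("$rpc" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    set_ANA_state() {
        # Flip the ANA state of the 4420/4421 listeners on the target side.
        "$rpc" nvmf_subsystem_listener_set_ana_state $NQN -t rdma -a 192.168.100.8 -s 4420 -n $1
        "$rpc" nvmf_subsystem_listener_set_ana_state $NQN -t rdma -a 192.168.100.8 -s 4421 -n $2
    }

    check_status() {
        # Six expectations: current, connected, accessible for ports 4420/4421.
        port_status 4420 current $1 && port_status 4421 current $2 &&
        port_status 4420 connected $3 && port_status 4421 connected $4 &&
        port_status 4420 accessible $5 && port_status 4421 accessible $6
    }

Note the policy change mid-run: under the initial policy only one path reports current=true at a time, whereas after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active the optimized/optimized case shows current=true on both 4420 and 4421. The nvme_qpair records that follow appear to be the expected side effect of the inaccessible phases: READs issued to a listener in that ANA state complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), which the multipath bdev layer retries on the remaining accessible path.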
00:20:39.862 19810.00 IOPS, 77.38 MiB/s [2024-12-10T03:08:34.251Z] 19890.50 IOPS, 77.70 MiB/s [2024-12-10T03:08:34.251Z] 19797.33 IOPS, 77.33 MiB/s [2024-12-10T03:08:34.251Z] 19808.00 IOPS, 77.38 MiB/s [2024-12-10T03:08:34.251Z] 19788.80 IOPS, 77.30 MiB/s [2024-12-10T03:08:34.251Z] 19787.67 IOPS, 77.30 MiB/s [2024-12-10T03:08:34.251Z] 19766.86 IOPS, 77.21 MiB/s [2024-12-10T03:08:34.251Z] 19752.88 IOPS, 77.16 MiB/s [2024-12-10T03:08:34.251Z] 19726.22 IOPS, 77.06 MiB/s [2024-12-10T03:08:34.251Z] 19712.00 IOPS, 77.00 MiB/s [2024-12-10T03:08:34.251Z] 19712.00 IOPS, 77.00 MiB/s [2024-12-10T03:08:34.251Z] [2024-12-10 04:08:19.284995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x180c00 00:20:39.862 [2024-12-10 04:08:19.285029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:39.862 [2024-12-10 04:08:19.285079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x180c00 00:20:39.862 [2024-12-10 04:08:19.285086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:39.862 [2024-12-10 04:08:19.285098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x180c00 00:20:39.862 [2024-12-10 04:08:19.285104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:39.862 [2024-12-10 04:08:19.285113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x180c00 00:20:39.862 [2024-12-10 04:08:19.285119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:39.862 [2024-12-10 04:08:19.285128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x180c00 00:20:39.862 [2024-12-10 04:08:19.285134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:39.862 [2024-12-10 04:08:19.285143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x180c00 00:20:39.862 [2024-12-10 04:08:19.285149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:39.862 [2024-12-10 04:08:19.285157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x180c00 00:20:39.862 [2024-12-10 04:08:19.285163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:39.862 [2024-12-10 04:08:19.285172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x180c00 00:20:39.862 
[2024-12-10 04:08:19.285178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:39.862 [2024-12-10 04:08:19.285187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x180c00 00:20:39.862 [2024-12-10 04:08:19.285193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285321] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:39.863 [2024-12-10 04:08:19.285742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x180c00 00:20:39.863 [2024-12-10 04:08:19.285747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 
dnr:0 00:20:39.864 [2024-12-10 04:08:19.285855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 
04:08:19.285984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.285990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.285998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043da000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:17328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:39.864 [2024-12-10 04:08:19.286280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x180c00 00:20:39.864 [2024-12-10 04:08:19.286286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x180c00 00:20:39.865 [2024-12-10 04:08:19.286300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x180c00 00:20:39.865 [2024-12-10 04:08:19.286314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x180c00 00:20:39.865 [2024-12-10 04:08:19.286329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x180c00 00:20:39.865 [2024-12-10 04:08:19.286343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x180c00 00:20:39.865 [2024-12-10 04:08:19.286359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x180c00 00:20:39.865 [2024-12-10 04:08:19.286374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17400 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x180c00 00:20:39.865 [2024-12-10 04:08:19.286387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.286401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.286416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.286430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.286444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.286458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.286472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.286489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.286789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.286800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287188] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:39.865 [2024-12-10 04:08:19.287611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.865 [2024-12-10 04:08:19.287617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:19.287629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:19.287635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:19.287647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:19.287653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:19.287667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:19.287673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:19.287685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:19.287691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:19.287704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:19.287710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:19.287723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:19.287729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:39.866 19288.50 IOPS, 75.35 MiB/s [2024-12-10T03:08:34.255Z] 17804.77 IOPS, 69.55 MiB/s [2024-12-10T03:08:34.255Z] 16533.00 IOPS, 64.58 MiB/s [2024-12-10T03:08:34.255Z] 15760.40 IOPS, 61.56 MiB/s [2024-12-10T03:08:34.255Z] 16001.88 
IOPS, 62.51 MiB/s [2024-12-10T03:08:34.255Z] 16172.94 IOPS, 63.18 MiB/s [2024-12-10T03:08:34.255Z] 16143.83 IOPS, 63.06 MiB/s [2024-12-10T03:08:34.255Z] 16114.11 IOPS, 62.95 MiB/s [2024-12-10T03:08:34.255Z] 16255.45 IOPS, 63.50 MiB/s [2024-12-10T03:08:34.255Z] 16423.90 IOPS, 64.16 MiB/s [2024-12-10T03:08:34.255Z] 16536.68 IOPS, 64.60 MiB/s [2024-12-10T03:08:34.255Z] 16489.35 IOPS, 64.41 MiB/s [2024-12-10T03:08:34.255Z] 16446.88 IOPS, 64.25 MiB/s [2024-12-10T03:08:34.255Z] [2024-12-10 04:08:31.778942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.778977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.778992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 
04:08:31.779605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 
len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x180c00 00:20:39.866 [2024-12-10 04:08:31.779828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.866 [2024-12-10 04:08:31.779858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:39.866 [2024-12-10 04:08:31.779866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.779872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:39.867 
[2024-12-10 04:08:31.779880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.779887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.779895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.779901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.779909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.779915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.779925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.779931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.779940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.779946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.779954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.779960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.779968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.779974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.779983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.779989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.779997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:73 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x180c00 00:20:39.867 [2024-12-10 04:08:31.780581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.867 [2024-12-10 04:08:31.780595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:39.867 [2024-12-10 04:08:31.780604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.868 [2024-12-10 04:08:31.780609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.780618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x180c00 00:20:39.868 [2024-12-10 04:08:31.780623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.780633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x180c00 00:20:39.868 [2024-12-10 
04:08:31.780639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.780649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x180c00 00:20:39.868 [2024-12-10 04:08:31.780656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.780664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x180c00 00:20:39.868 [2024-12-10 04:08:31.780671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.780679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.868 [2024-12-10 04:08:31.780684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.781913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.868 [2024-12-10 04:08:31.781928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.781940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x180c00 00:20:39.868 [2024-12-10 04:08:31.781947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.781956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x180c00 00:20:39.868 [2024-12-10 04:08:31.781962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.782463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x180c00 00:20:39.868 [2024-12-10 04:08:31.782472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.782483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.868 [2024-12-10 04:08:31.782490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:39.868 [2024-12-10 04:08:31.782498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x180c00 00:20:39.868 [2024-12-10 04:08:31.782504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0
00:20:39.868 [... several hundred further nvme_qpair notices elided: the same alternating pattern of nvme_io_qpair_print_command READ/WRITE entries and spdk_nvme_print_completion entries, every in-flight I/O on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the path under test was inaccessible ...]
00:20:39.872 16532.28 IOPS, 64.58 MiB/s
[2024-12-10T03:08:34.261Z] 16653.00 IOPS, 65.05 MiB/s
[2024-12-10T03:08:34.261Z] Received shutdown signal, test time was about 26.393862 seconds
00:20:39.872
00:20:39.872                                                    Latency(us)
00:20:39.872 [2024-12-10T03:08:34.261Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average  min     max
00:20:39.872 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:39.872 Verification LBA range: start 0x0 length 0x4000
00:20:39.872 Nvme0n1            :      26.39  16692.97  65.21   0.00   0.00  7646.88  682.67  3007471.31
00:20:39.872 [2024-12-10T03:08:34.261Z] ===================================================================================================================
00:20:39.872 [2024-12-10T03:08:34.261Z] Total              :             16692.97  65.21   0.00   0.00  7646.88  682.67  3007471.31
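[Editorial note: the MiB/s columns in the summary above are just the IOPS figures scaled by the job's fixed 4096-byte I/O size; a one-line check reproduces the Nvme0n1 row:]

  awk 'BEGIN { printf "%.2f MiB/s\n", 16692.97 * 4096 / (1024 * 1024) }'   # prints 65.21 MiB/s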
00:20:39.872 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 839652 ']'
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 839652
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 839652 ']'
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 839652
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 839652
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 839652'
killing process with pid 839652
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 839652
00:20:40.130 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 839652
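[Editorial note: the xtrace above is the harness's standard teardown; the following bash re-sketch condenses the pattern it steps through. The function body and the retry back-off are illustrative reconstructions from the trace, not SPDK's verbatim source.]

  # unload the fabric modules, retrying while references drain
  set +e                                   # a still-busy module makes modprobe -r fail
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && break
      sleep 0.5                            # back-off interval is an assumption, not in the trace
  done
  modprobe -v -r nvme-fabrics
  set -e

  # verify-and-kill by PID, mirroring the killprocess 839652 trace
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1            # the '[' -z 839652 ']' guard above
      kill -0 "$pid" || return 0           # process already gone, nothing to do
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1   # never signal a bare sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                          # valid because the harness spawned $pid itself
  }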
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:40.388 00:20:40.388 real 0m35.386s 00:20:40.388 user 1m42.665s 00:20:40.388 sys 0m7.412s 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:40.388 ************************************ 00:20:40.388 END TEST nvmf_host_multipath_status 00:20:40.388 ************************************ 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.388 ************************************ 00:20:40.388 START TEST nvmf_discovery_remove_ifc 00:20:40.388 ************************************ 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:20:40.388 * Looking for test storage... 00:20:40.388 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:20:40.388 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:40.646 04:08:34 
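The nvmftestfini sequence traced just above runs in a fixed order: delete the subsystem over JSON-RPC, sync, unload the kernel initiator modules (the rmmod nvme_rdma / rmmod nvme_fabrics lines are modprobe -v output), then kill the target and poll with kill -0 until the PID is gone. Condensed into a standalone sketch, with the SPDK path and PID taken from this run and meant as placeholders elsewhere:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
PID=839652                           # nvmf_tgt (reactor_0) pid from the log above
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-rdma             # the log shows nvme_fabrics being removed next
modprobe -v -r nvme-fabrics
kill "$PID"
# kill -0 sends no signal; it only tests that the process still exists
while kill -0 "$PID" 2>/dev/null; do sleep 0.1; done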
nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:40.646 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:40.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.647 --rc genhtml_branch_coverage=1 00:20:40.647 --rc genhtml_function_coverage=1 00:20:40.647 --rc genhtml_legend=1 00:20:40.647 --rc geninfo_all_blocks=1 00:20:40.647 --rc geninfo_unexecuted_blocks=1 00:20:40.647 00:20:40.647 ' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:40.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.647 --rc genhtml_branch_coverage=1 00:20:40.647 --rc genhtml_function_coverage=1 00:20:40.647 --rc genhtml_legend=1 00:20:40.647 --rc geninfo_all_blocks=1 00:20:40.647 --rc geninfo_unexecuted_blocks=1 00:20:40.647 00:20:40.647 ' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:40.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.647 --rc genhtml_branch_coverage=1 00:20:40.647 --rc genhtml_function_coverage=1 00:20:40.647 --rc genhtml_legend=1 00:20:40.647 --rc geninfo_all_blocks=1 00:20:40.647 --rc geninfo_unexecuted_blocks=1 00:20:40.647 00:20:40.647 ' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:40.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.647 --rc genhtml_branch_coverage=1 00:20:40.647 --rc genhtml_function_coverage=1 00:20:40.647 --rc genhtml_legend=1 00:20:40.647 --rc geninfo_all_blocks=1 00:20:40.647 --rc geninfo_unexecuted_blocks=1 
00:20:40.647 00:20:40.647 ' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:20:40.647 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:20:40.647 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:20:40.647 00:20:40.647 real 0m0.147s 00:20:40.647 user 0m0.081s 00:20:40.647 sys 0m0.074s 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:40.647 ************************************ 00:20:40.647 END TEST nvmf_discovery_remove_ifc 00:20:40.647 ************************************ 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.647 ************************************ 00:20:40.647 START TEST nvmf_identify_kernel_target 00:20:40.647 ************************************ 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:20:40.647 * Looking for test storage... 
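The "[: : integer expression expected" complaint above is a genuine (if harmless here) bash error: line 33 of test/nvmf/common.sh ends up running '[' '' -eq 1 ']', and test's -eq requires integer operands on both sides, so an unset or empty variable makes the comparison exit with status 2 instead of a clean false. The usual guard is to default the variable to 0 before the numeric test, for example:

# '-eq' needs integers; an empty string is an error, not "false".
[ "" -eq 1 ]             # -> [: : integer expression expected (exit status 2)
flag=""
[ "${flag:-0}" -eq 1 ]   # -> clean false: empty defaults to 0
(( flag == 1 ))          # arithmetic context also treats empty/unset as 0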
00:20:40.647 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:20:40.647 04:08:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.904 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:40.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.904 --rc genhtml_branch_coverage=1 00:20:40.904 --rc genhtml_function_coverage=1 00:20:40.904 --rc genhtml_legend=1 00:20:40.905 --rc geninfo_all_blocks=1 00:20:40.905 --rc geninfo_unexecuted_blocks=1 00:20:40.905 00:20:40.905 ' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.905 --rc genhtml_branch_coverage=1 00:20:40.905 --rc genhtml_function_coverage=1 00:20:40.905 --rc genhtml_legend=1 00:20:40.905 --rc geninfo_all_blocks=1 00:20:40.905 --rc geninfo_unexecuted_blocks=1 00:20:40.905 00:20:40.905 ' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.905 --rc genhtml_branch_coverage=1 00:20:40.905 --rc genhtml_function_coverage=1 00:20:40.905 --rc genhtml_legend=1 00:20:40.905 --rc geninfo_all_blocks=1 00:20:40.905 --rc geninfo_unexecuted_blocks=1 00:20:40.905 00:20:40.905 ' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:40.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.905 --rc genhtml_branch_coverage=1 00:20:40.905 --rc genhtml_function_coverage=1 00:20:40.905 --rc genhtml_legend=1 00:20:40.905 --rc geninfo_all_blocks=1 00:20:40.905 --rc geninfo_unexecuted_blocks=1 00:20:40.905 00:20:40.905 ' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
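The lt/cmp_versions trace that repeats above (here checking "lcov 1.15 < 2") is a field-by-field dotted-version compare: split both versions on '.', '-' and ':', iterate up to the longer field count, and return on the first differing field. Reconstructed from the visible trace as a standalone sketch; the real helper lives in spdk/scripts/common.sh and routes each field through a decimal() validity check, which is simplified to plain numeric fields here:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # compare field by field; a missing field counts as 0
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]   # all fields equal: only equality holds
}

lt 1.15 2 && echo "1.15 < 2"   # the comparison traced above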
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.905 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.905 04:08:35 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:46.173 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:46.173 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:46.173 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:46.174 Found net devices under 0000:18:00.0: mlx_0_0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:46.174 Found net devices under 0000:18:00.1: mlx_0_1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.174 04:08:40 
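The device scan above walks a prepared list of PCI IDs (the mlx array pairs Mellanox vendor 0x15b3 with ConnectX device IDs; both ports here report 0x1015) and then resolves each matching PCI address to its network interface by globbing sysfs. The resolution step on its own looks like this, a sketch using the first address found above:

pci=0000:18:00.0                                    # first mlx5 port in this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep the name
echo "Found net devices under $pci: ${pci_net_devs[*]}"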
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:46.174 
04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:46.174 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:46.174 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:20:46.174 altname enp24s0f0np0 00:20:46.174 altname ens785f0np0 00:20:46.174 inet 192.168.100.8/24 scope global mlx_0_0 00:20:46.174 valid_lft forever preferred_lft forever 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:46.174 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:46.174 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:20:46.174 altname enp24s0f1np1 00:20:46.174 altname ens785f1np1 00:20:46.174 inet 192.168.100.9/24 scope global mlx_0_1 00:20:46.174 valid_lft forever preferred_lft forever 00:20:46.174 04:08:40 
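allocate_nic_ips, traced above, resolves each RDMA-capable interface to its IPv4 address by parsing one-record-per-line ip(8) output: field 4 of 'ip -o -4 addr show <if>' is the CIDR address, and cut strips the prefix length. As a standalone helper matching the trace:

get_ip_address() {
    local interface=$1
    # 'ip -o' prints one record per line; $4 is e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9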
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:46.174 
04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:46.174 192.168.100.9' 00:20:46.174 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:46.174 192.168.100.9' 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:46.175 192.168.100.9' 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:46.175 04:08:40 
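With both addresses collected into the newline-separated RDMA_IP_LIST, the first and second target IPs fall out of the head/tail split traced above:

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9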
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:46.175 04:08:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:20:48.704 Waiting for block devices as requested 00:20:48.963 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:48.963 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:48.963 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:48.963 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:49.221 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:49.221 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:49.221 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:49.221 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:49.480 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:49.480 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:49.480 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:49.739 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:49.739 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:49.739 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:49.739 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:49.997 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:49.997 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:51.382 04:08:45 
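configure_kernel_target, whose trace continues below, builds a Linux-kernel NVMe-oF target through the nvmet configfs tree: one subsystem directory, one namespace backed by the local /dev/nvme0n1, and one RDMA port on 192.168.100.8:4420, tied together with a symlink. xtrace does not print the redirection targets of the echo calls, so the attribute paths in this sketch are the standard nvmet ones the traced values map onto (the two "echo 1" calls onto attr_allow_any_host and the namespace enable flag; the "echo SPDK-nqn..." line likely sets the subsystem model string and is left out), not paths read from this log:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1             > "$subsys/attr_allow_any_host"        # assumed target of one 'echo 1'
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"
echo rdma          > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"    # expose the subsystem on the port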
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:51.382 No valid GPT data, bailing 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:51.382 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:20:51.641 00:20:51.641 Discovery Log Number of Records 2, Generation counter 2 00:20:51.641 =====Discovery Log Entry 0====== 00:20:51.641 trtype: rdma 00:20:51.641 adrfam: ipv4 00:20:51.641 subtype: current discovery subsystem 00:20:51.641 treq: not specified, sq 
flow control disable supported 00:20:51.641 portid: 1 00:20:51.641 trsvcid: 4420 00:20:51.641 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:51.641 traddr: 192.168.100.8 00:20:51.641 eflags: none 00:20:51.641 rdma_prtype: not specified 00:20:51.641 rdma_qptype: connected 00:20:51.641 rdma_cms: rdma-cm 00:20:51.641 rdma_pkey: 0x0000 00:20:51.641 =====Discovery Log Entry 1====== 00:20:51.641 trtype: rdma 00:20:51.641 adrfam: ipv4 00:20:51.641 subtype: nvme subsystem 00:20:51.641 treq: not specified, sq flow control disable supported 00:20:51.641 portid: 1 00:20:51.641 trsvcid: 4420 00:20:51.641 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:51.641 traddr: 192.168.100.8 00:20:51.641 eflags: none 00:20:51.641 rdma_prtype: not specified 00:20:51.641 rdma_qptype: connected 00:20:51.641 rdma_cms: rdma-cm 00:20:51.641 rdma_pkey: 0x0000 00:20:51.641 04:08:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:20:51.641 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:51.901 ===================================================== 00:20:51.902 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:51.902 ===================================================== 00:20:51.902 Controller Capabilities/Features 00:20:51.902 ================================ 00:20:51.902 Vendor ID: 0000 00:20:51.902 Subsystem Vendor ID: 0000 00:20:51.902 Serial Number: 2425b969377cbfeb4c73 00:20:51.902 Model Number: Linux 00:20:51.902 Firmware Version: 6.8.9-20 00:20:51.902 Recommended Arb Burst: 0 00:20:51.902 IEEE OUI Identifier: 00 00 00 00:20:51.902 Multi-path I/O 00:20:51.902 May have multiple subsystem ports: No 00:20:51.902 May have multiple controllers: No 00:20:51.902 Associated with SR-IOV VF: No 00:20:51.902 Max Data Transfer Size: Unlimited 00:20:51.902 Max Number of Namespaces: 0 00:20:51.902 Max Number of I/O Queues: 1024 00:20:51.902 NVMe Specification Version (VS): 1.3 00:20:51.902 NVMe Specification Version (Identify): 1.3 00:20:51.902 Maximum Queue Entries: 128 00:20:51.902 Contiguous Queues Required: No 00:20:51.902 Arbitration Mechanisms Supported 00:20:51.902 Weighted Round Robin: Not Supported 00:20:51.902 Vendor Specific: Not Supported 00:20:51.902 Reset Timeout: 7500 ms 00:20:51.902 Doorbell Stride: 4 bytes 00:20:51.902 NVM Subsystem Reset: Not Supported 00:20:51.902 Command Sets Supported 00:20:51.902 NVM Command Set: Supported 00:20:51.902 Boot Partition: Not Supported 00:20:51.902 Memory Page Size Minimum: 4096 bytes 00:20:51.902 Memory Page Size Maximum: 4096 bytes 00:20:51.902 Persistent Memory Region: Not Supported 00:20:51.902 Optional Asynchronous Events Supported 00:20:51.902 Namespace Attribute Notices: Not Supported 00:20:51.902 Firmware Activation Notices: Not Supported 00:20:51.902 ANA Change Notices: Not Supported 00:20:51.902 PLE Aggregate Log Change Notices: Not Supported 00:20:51.902 LBA Status Info Alert Notices: Not Supported 00:20:51.902 EGE Aggregate Log Change Notices: Not Supported 00:20:51.902 Normal NVM Subsystem Shutdown event: Not Supported 00:20:51.902 Zone Descriptor Change Notices: Not Supported 00:20:51.902 Discovery Log Change Notices: Supported 00:20:51.902 Controller Attributes 00:20:51.902 128-bit Host Identifier: Not Supported 00:20:51.902 Non-Operational Permissive Mode: Not Supported 00:20:51.902 NVM Sets: Not Supported 00:20:51.902 Read Recovery Levels: 
Not Supported 00:20:51.902 Endurance Groups: Not Supported 00:20:51.902 Predictable Latency Mode: Not Supported 00:20:51.902 Traffic Based Keep ALive: Not Supported 00:20:51.902 Namespace Granularity: Not Supported 00:20:51.902 SQ Associations: Not Supported 00:20:51.902 UUID List: Not Supported 00:20:51.902 Multi-Domain Subsystem: Not Supported 00:20:51.902 Fixed Capacity Management: Not Supported 00:20:51.902 Variable Capacity Management: Not Supported 00:20:51.902 Delete Endurance Group: Not Supported 00:20:51.902 Delete NVM Set: Not Supported 00:20:51.902 Extended LBA Formats Supported: Not Supported 00:20:51.902 Flexible Data Placement Supported: Not Supported 00:20:51.902 00:20:51.902 Controller Memory Buffer Support 00:20:51.902 ================================ 00:20:51.902 Supported: No 00:20:51.902 00:20:51.902 Persistent Memory Region Support 00:20:51.902 ================================ 00:20:51.902 Supported: No 00:20:51.902 00:20:51.902 Admin Command Set Attributes 00:20:51.902 ============================ 00:20:51.902 Security Send/Receive: Not Supported 00:20:51.902 Format NVM: Not Supported 00:20:51.902 Firmware Activate/Download: Not Supported 00:20:51.902 Namespace Management: Not Supported 00:20:51.902 Device Self-Test: Not Supported 00:20:51.902 Directives: Not Supported 00:20:51.902 NVMe-MI: Not Supported 00:20:51.902 Virtualization Management: Not Supported 00:20:51.902 Doorbell Buffer Config: Not Supported 00:20:51.902 Get LBA Status Capability: Not Supported 00:20:51.902 Command & Feature Lockdown Capability: Not Supported 00:20:51.902 Abort Command Limit: 1 00:20:51.902 Async Event Request Limit: 1 00:20:51.902 Number of Firmware Slots: N/A 00:20:51.902 Firmware Slot 1 Read-Only: N/A 00:20:51.902 Firmware Activation Without Reset: N/A 00:20:51.902 Multiple Update Detection Support: N/A 00:20:51.902 Firmware Update Granularity: No Information Provided 00:20:51.902 Per-Namespace SMART Log: No 00:20:51.902 Asymmetric Namespace Access Log Page: Not Supported 00:20:51.902 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:51.902 Command Effects Log Page: Not Supported 00:20:51.902 Get Log Page Extended Data: Supported 00:20:51.902 Telemetry Log Pages: Not Supported 00:20:51.902 Persistent Event Log Pages: Not Supported 00:20:51.902 Supported Log Pages Log Page: May Support 00:20:51.902 Commands Supported & Effects Log Page: Not Supported 00:20:51.902 Feature Identifiers & Effects Log Page:May Support 00:20:51.902 NVMe-MI Commands & Effects Log Page: May Support 00:20:51.902 Data Area 4 for Telemetry Log: Not Supported 00:20:51.902 Error Log Page Entries Supported: 1 00:20:51.902 Keep Alive: Not Supported 00:20:51.902 00:20:51.902 NVM Command Set Attributes 00:20:51.902 ========================== 00:20:51.902 Submission Queue Entry Size 00:20:51.902 Max: 1 00:20:51.902 Min: 1 00:20:51.902 Completion Queue Entry Size 00:20:51.902 Max: 1 00:20:51.902 Min: 1 00:20:51.902 Number of Namespaces: 0 00:20:51.902 Compare Command: Not Supported 00:20:51.902 Write Uncorrectable Command: Not Supported 00:20:51.902 Dataset Management Command: Not Supported 00:20:51.902 Write Zeroes Command: Not Supported 00:20:51.902 Set Features Save Field: Not Supported 00:20:51.902 Reservations: Not Supported 00:20:51.902 Timestamp: Not Supported 00:20:51.902 Copy: Not Supported 00:20:51.902 Volatile Write Cache: Not Present 00:20:51.902 Atomic Write Unit (Normal): 1 00:20:51.902 Atomic Write Unit (PFail): 1 00:20:51.902 Atomic Compare & Write Unit: 1 00:20:51.902 Fused Compare & Write: Not 
Supported 00:20:51.902 Scatter-Gather List 00:20:51.902 SGL Command Set: Supported 00:20:51.902 SGL Keyed: Supported 00:20:51.902 SGL Bit Bucket Descriptor: Not Supported 00:20:51.902 SGL Metadata Pointer: Not Supported 00:20:51.902 Oversized SGL: Not Supported 00:20:51.902 SGL Metadata Address: Not Supported 00:20:51.902 SGL Offset: Supported 00:20:51.902 Transport SGL Data Block: Not Supported 00:20:51.902 Replay Protected Memory Block: Not Supported 00:20:51.902 00:20:51.902 Firmware Slot Information 00:20:51.902 ========================= 00:20:51.902 Active slot: 0 00:20:51.902 00:20:51.902 00:20:51.902 Error Log 00:20:51.902 ========= 00:20:51.902 00:20:51.902 Active Namespaces 00:20:51.902 ================= 00:20:51.902 Discovery Log Page 00:20:51.902 ================== 00:20:51.902 Generation Counter: 2 00:20:51.902 Number of Records: 2 00:20:51.902 Record Format: 0 00:20:51.902 00:20:51.902 Discovery Log Entry 0 00:20:51.902 ---------------------- 00:20:51.902 Transport Type: 1 (RDMA) 00:20:51.902 Address Family: 1 (IPv4) 00:20:51.902 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:51.902 Entry Flags: 00:20:51.902 Duplicate Returned Information: 0 00:20:51.902 Explicit Persistent Connection Support for Discovery: 0 00:20:51.902 Transport Requirements: 00:20:51.902 Secure Channel: Not Specified 00:20:51.902 Port ID: 1 (0x0001) 00:20:51.902 Controller ID: 65535 (0xffff) 00:20:51.902 Admin Max SQ Size: 32 00:20:51.902 Transport Service Identifier: 4420 00:20:51.902 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:51.902 Transport Address: 192.168.100.8 00:20:51.902 Transport Specific Address Subtype - RDMA 00:20:51.902 RDMA QP Service Type: 1 (Reliable Connected) 00:20:51.902 RDMA Provider Type: 1 (No provider specified) 00:20:51.902 RDMA CM Service: 1 (RDMA_CM) 00:20:51.902 Discovery Log Entry 1 00:20:51.902 ---------------------- 00:20:51.902 Transport Type: 1 (RDMA) 00:20:51.902 Address Family: 1 (IPv4) 00:20:51.902 Subsystem Type: 2 (NVM Subsystem) 00:20:51.902 Entry Flags: 00:20:51.902 Duplicate Returned Information: 0 00:20:51.902 Explicit Persistent Connection Support for Discovery: 0 00:20:51.902 Transport Requirements: 00:20:51.902 Secure Channel: Not Specified 00:20:51.902 Port ID: 1 (0x0001) 00:20:51.902 Controller ID: 65535 (0xffff) 00:20:51.902 Admin Max SQ Size: 32 00:20:51.902 Transport Service Identifier: 4420 00:20:51.902 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:51.902 Transport Address: 192.168.100.8 00:20:51.902 Transport Specific Address Subtype - RDMA 00:20:51.902 RDMA QP Service Type: 1 (Reliable Connected) 00:20:51.902 RDMA Provider Type: 1 (No provider specified) 00:20:51.902 RDMA CM Service: 1 (RDMA_CM) 00:20:51.902 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:51.902 get_feature(0x01) failed 00:20:51.902 get_feature(0x02) failed 00:20:51.902 get_feature(0x04) failed 00:20:51.902 ===================================================== 00:20:51.903 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:20:51.903 ===================================================== 00:20:51.903 Controller Capabilities/Features 00:20:51.903 ================================ 00:20:51.903 Vendor ID: 0000 00:20:51.903 Subsystem Vendor ID: 0000 00:20:51.903 Serial Number: 
1150166e9782cb3bfde1 00:20:51.903 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:51.903 Firmware Version: 6.8.9-20 00:20:51.903 Recommended Arb Burst: 6 00:20:51.903 IEEE OUI Identifier: 00 00 00 00:20:51.903 Multi-path I/O 00:20:51.903 May have multiple subsystem ports: Yes 00:20:51.903 May have multiple controllers: Yes 00:20:51.903 Associated with SR-IOV VF: No 00:20:51.903 Max Data Transfer Size: 1048576 00:20:51.903 Max Number of Namespaces: 1024 00:20:51.903 Max Number of I/O Queues: 128 00:20:51.903 NVMe Specification Version (VS): 1.3 00:20:51.903 NVMe Specification Version (Identify): 1.3 00:20:51.903 Maximum Queue Entries: 128 00:20:51.903 Contiguous Queues Required: No 00:20:51.903 Arbitration Mechanisms Supported 00:20:51.903 Weighted Round Robin: Not Supported 00:20:51.903 Vendor Specific: Not Supported 00:20:51.903 Reset Timeout: 7500 ms 00:20:51.903 Doorbell Stride: 4 bytes 00:20:51.903 NVM Subsystem Reset: Not Supported 00:20:51.903 Command Sets Supported 00:20:51.903 NVM Command Set: Supported 00:20:51.903 Boot Partition: Not Supported 00:20:51.903 Memory Page Size Minimum: 4096 bytes 00:20:51.903 Memory Page Size Maximum: 4096 bytes 00:20:51.903 Persistent Memory Region: Not Supported 00:20:51.903 Optional Asynchronous Events Supported 00:20:51.903 Namespace Attribute Notices: Supported 00:20:51.903 Firmware Activation Notices: Not Supported 00:20:51.903 ANA Change Notices: Supported 00:20:51.903 PLE Aggregate Log Change Notices: Not Supported 00:20:51.903 LBA Status Info Alert Notices: Not Supported 00:20:51.903 EGE Aggregate Log Change Notices: Not Supported 00:20:51.903 Normal NVM Subsystem Shutdown event: Not Supported 00:20:51.903 Zone Descriptor Change Notices: Not Supported 00:20:51.903 Discovery Log Change Notices: Not Supported 00:20:51.903 Controller Attributes 00:20:51.903 128-bit Host Identifier: Supported 00:20:51.903 Non-Operational Permissive Mode: Not Supported 00:20:51.903 NVM Sets: Not Supported 00:20:51.903 Read Recovery Levels: Not Supported 00:20:51.903 Endurance Groups: Not Supported 00:20:51.903 Predictable Latency Mode: Not Supported 00:20:51.903 Traffic Based Keep ALive: Supported 00:20:51.903 Namespace Granularity: Not Supported 00:20:51.903 SQ Associations: Not Supported 00:20:51.903 UUID List: Not Supported 00:20:51.903 Multi-Domain Subsystem: Not Supported 00:20:51.903 Fixed Capacity Management: Not Supported 00:20:51.903 Variable Capacity Management: Not Supported 00:20:51.903 Delete Endurance Group: Not Supported 00:20:51.903 Delete NVM Set: Not Supported 00:20:51.903 Extended LBA Formats Supported: Not Supported 00:20:51.903 Flexible Data Placement Supported: Not Supported 00:20:51.903 00:20:51.903 Controller Memory Buffer Support 00:20:51.903 ================================ 00:20:51.903 Supported: No 00:20:51.903 00:20:51.903 Persistent Memory Region Support 00:20:51.903 ================================ 00:20:51.903 Supported: No 00:20:51.903 00:20:51.903 Admin Command Set Attributes 00:20:51.903 ============================ 00:20:51.903 Security Send/Receive: Not Supported 00:20:51.903 Format NVM: Not Supported 00:20:51.903 Firmware Activate/Download: Not Supported 00:20:51.903 Namespace Management: Not Supported 00:20:51.903 Device Self-Test: Not Supported 00:20:51.903 Directives: Not Supported 00:20:51.903 NVMe-MI: Not Supported 00:20:51.903 Virtualization Management: Not Supported 00:20:51.903 Doorbell Buffer Config: Not Supported 00:20:51.903 Get LBA Status Capability: Not Supported 00:20:51.903 Command & Feature Lockdown 
Capability: Not Supported 00:20:51.903 Abort Command Limit: 4 00:20:51.903 Async Event Request Limit: 4 00:20:51.903 Number of Firmware Slots: N/A 00:20:51.903 Firmware Slot 1 Read-Only: N/A 00:20:51.903 Firmware Activation Without Reset: N/A 00:20:51.903 Multiple Update Detection Support: N/A 00:20:51.903 Firmware Update Granularity: No Information Provided 00:20:51.903 Per-Namespace SMART Log: Yes 00:20:51.903 Asymmetric Namespace Access Log Page: Supported 00:20:51.903 ANA Transition Time : 10 sec 00:20:51.903 00:20:51.903 Asymmetric Namespace Access Capabilities 00:20:51.903 ANA Optimized State : Supported 00:20:51.903 ANA Non-Optimized State : Supported 00:20:51.903 ANA Inaccessible State : Supported 00:20:51.903 ANA Persistent Loss State : Supported 00:20:51.903 ANA Change State : Supported 00:20:51.903 ANAGRPID is not changed : No 00:20:51.903 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:51.903 00:20:51.903 ANA Group Identifier Maximum : 128 00:20:51.903 Number of ANA Group Identifiers : 128 00:20:51.903 Max Number of Allowed Namespaces : 1024 00:20:51.903 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:51.903 Command Effects Log Page: Supported 00:20:51.903 Get Log Page Extended Data: Supported 00:20:51.903 Telemetry Log Pages: Not Supported 00:20:51.903 Persistent Event Log Pages: Not Supported 00:20:51.903 Supported Log Pages Log Page: May Support 00:20:51.903 Commands Supported & Effects Log Page: Not Supported 00:20:51.903 Feature Identifiers & Effects Log Page:May Support 00:20:51.903 NVMe-MI Commands & Effects Log Page: May Support 00:20:51.903 Data Area 4 for Telemetry Log: Not Supported 00:20:51.903 Error Log Page Entries Supported: 128 00:20:51.903 Keep Alive: Supported 00:20:51.903 Keep Alive Granularity: 1000 ms 00:20:51.903 00:20:51.903 NVM Command Set Attributes 00:20:51.903 ========================== 00:20:51.903 Submission Queue Entry Size 00:20:51.903 Max: 64 00:20:51.903 Min: 64 00:20:51.903 Completion Queue Entry Size 00:20:51.903 Max: 16 00:20:51.903 Min: 16 00:20:51.903 Number of Namespaces: 1024 00:20:51.903 Compare Command: Not Supported 00:20:51.903 Write Uncorrectable Command: Not Supported 00:20:51.903 Dataset Management Command: Supported 00:20:51.903 Write Zeroes Command: Supported 00:20:51.903 Set Features Save Field: Not Supported 00:20:51.903 Reservations: Not Supported 00:20:51.903 Timestamp: Not Supported 00:20:51.903 Copy: Not Supported 00:20:51.903 Volatile Write Cache: Present 00:20:51.903 Atomic Write Unit (Normal): 1 00:20:51.903 Atomic Write Unit (PFail): 1 00:20:51.903 Atomic Compare & Write Unit: 1 00:20:51.903 Fused Compare & Write: Not Supported 00:20:51.903 Scatter-Gather List 00:20:51.903 SGL Command Set: Supported 00:20:51.903 SGL Keyed: Supported 00:20:51.903 SGL Bit Bucket Descriptor: Not Supported 00:20:51.903 SGL Metadata Pointer: Not Supported 00:20:51.903 Oversized SGL: Not Supported 00:20:51.903 SGL Metadata Address: Not Supported 00:20:51.903 SGL Offset: Supported 00:20:51.903 Transport SGL Data Block: Not Supported 00:20:51.903 Replay Protected Memory Block: Not Supported 00:20:51.903 00:20:51.903 Firmware Slot Information 00:20:51.903 ========================= 00:20:51.903 Active slot: 0 00:20:51.903 00:20:51.903 Asymmetric Namespace Access 00:20:51.903 =========================== 00:20:51.903 Change Count : 0 00:20:51.903 Number of ANA Group Descriptors : 1 00:20:51.903 ANA Group Descriptor : 0 00:20:51.903 ANA Group ID : 1 00:20:51.903 Number of NSID Values : 1 00:20:51.903 Change Count : 0 00:20:51.903 ANA State 
: 1 00:20:51.903 Namespace Identifier : 1 00:20:51.903 00:20:51.903 Commands Supported and Effects 00:20:51.903 ============================== 00:20:51.903 Admin Commands 00:20:51.903 -------------- 00:20:51.903 Get Log Page (02h): Supported 00:20:51.903 Identify (06h): Supported 00:20:51.903 Abort (08h): Supported 00:20:51.903 Set Features (09h): Supported 00:20:51.903 Get Features (0Ah): Supported 00:20:51.903 Asynchronous Event Request (0Ch): Supported 00:20:51.903 Keep Alive (18h): Supported 00:20:51.903 I/O Commands 00:20:51.903 ------------ 00:20:51.903 Flush (00h): Supported 00:20:51.903 Write (01h): Supported LBA-Change 00:20:51.903 Read (02h): Supported 00:20:51.903 Write Zeroes (08h): Supported LBA-Change 00:20:51.903 Dataset Management (09h): Supported 00:20:51.903 00:20:51.903 Error Log 00:20:51.903 ========= 00:20:51.903 Entry: 0 00:20:51.903 Error Count: 0x3 00:20:51.903 Submission Queue Id: 0x0 00:20:51.903 Command Id: 0x5 00:20:51.903 Phase Bit: 0 00:20:51.903 Status Code: 0x2 00:20:51.903 Status Code Type: 0x0 00:20:51.903 Do Not Retry: 1 00:20:51.903 Error Location: 0x28 00:20:51.903 LBA: 0x0 00:20:51.903 Namespace: 0x0 00:20:51.903 Vendor Log Page: 0x0 00:20:51.903 ----------- 00:20:51.903 Entry: 1 00:20:51.903 Error Count: 0x2 00:20:51.903 Submission Queue Id: 0x0 00:20:51.903 Command Id: 0x5 00:20:51.903 Phase Bit: 0 00:20:51.903 Status Code: 0x2 00:20:51.903 Status Code Type: 0x0 00:20:51.904 Do Not Retry: 1 00:20:51.904 Error Location: 0x28 00:20:51.904 LBA: 0x0 00:20:51.904 Namespace: 0x0 00:20:51.904 Vendor Log Page: 0x0 00:20:51.904 ----------- 00:20:51.904 Entry: 2 00:20:51.904 Error Count: 0x1 00:20:51.904 Submission Queue Id: 0x0 00:20:51.904 Command Id: 0x0 00:20:51.904 Phase Bit: 0 00:20:51.904 Status Code: 0x2 00:20:51.904 Status Code Type: 0x0 00:20:51.904 Do Not Retry: 1 00:20:51.904 Error Location: 0x28 00:20:51.904 LBA: 0x0 00:20:51.904 Namespace: 0x0 00:20:51.904 Vendor Log Page: 0x0 00:20:51.904 00:20:51.904 Number of Queues 00:20:51.904 ================ 00:20:51.904 Number of I/O Submission Queues: 128 00:20:51.904 Number of I/O Completion Queues: 128 00:20:51.904 00:20:51.904 ZNS Specific Controller Data 00:20:51.904 ============================ 00:20:51.904 Zone Append Size Limit: 0 00:20:51.904 00:20:51.904 00:20:51.904 Active Namespaces 00:20:51.904 ================= 00:20:51.904 get_feature(0x05) failed 00:20:51.904 Namespace ID:1 00:20:51.904 Command Set Identifier: NVM (00h) 00:20:51.904 Deallocate: Supported 00:20:51.904 Deallocated/Unwritten Error: Not Supported 00:20:51.904 Deallocated Read Value: Unknown 00:20:51.904 Deallocate in Write Zeroes: Not Supported 00:20:51.904 Deallocated Guard Field: 0xFFFF 00:20:51.904 Flush: Supported 00:20:51.904 Reservation: Not Supported 00:20:51.904 Namespace Sharing Capabilities: Multiple Controllers 00:20:51.904 Size (in LBAs): 7814037168 (3726GiB) 00:20:51.904 Capacity (in LBAs): 7814037168 (3726GiB) 00:20:51.904 Utilization (in LBAs): 7814037168 (3726GiB) 00:20:51.904 UUID: d788ad81-8a0c-44f7-9820-10eecfdaf8ff 00:20:51.904 Thin Provisioning: Not Supported 00:20:51.904 Per-NS Atomic Units: Yes 00:20:51.904 Atomic Boundary Size (Normal): 0 00:20:51.904 Atomic Boundary Size (PFail): 0 00:20:51.904 Atomic Boundary Offset: 0 00:20:51.904 NGUID/EUI64 Never Reused: No 00:20:51.904 ANA group ID: 1 00:20:51.904 Namespace Write Protected: No 00:20:51.904 Number of LBA Formats: 1 00:20:51.904 Current LBA Format: LBA Format #00 00:20:51.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:51.904 00:20:51.904 
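For reference, the configure_kernel_target sequence traced above (nvmf/common.sh@686-705) reduces to the sketch below. The xtrace shows only the echo commands, not their redirection targets, so the configfs attribute paths here are a hedged reconstruction of the standard kernel nvmet layout; the NQN, namespace device, and listen address are the values from this run.

    # Hedged sketch of the kernel nvmet target setup traced above. The
    # attribute paths are assumed standard nvmet configfs names; the xtrace
    # does not show the redirection targets of the echo commands.
    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet                                 # exposes /sys/kernel/config/nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo "SPDK-$nqn"    > "$subsys/attr_model"     # model string seen in Identify above
    echo 1              > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1   > "$subsys/namespaces/1/device_path"
    echo 1              > "$subsys/namespaces/1/enable"
    echo 192.168.100.8  > "$port/addr_traddr"
    echo rdma           > "$port/addr_trtype"
    echo 4420           > "$port/addr_trsvcid"
    echo ipv4           > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"            # target starts listening

Once the symlink lands, the target is reachable, which is what the nvme discover call and the two spdk_nvme_identify runs above verify (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn). The mirror-image teardown, unlinking the port and rmdir-ing namespace, port, and subsystem before modprobe -r nvmet_rdma nvmet, follows in the nvmftestfini trace below.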
04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:51.904 rmmod nvme_rdma 00:20:51.904 rmmod nvme_fabrics 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:20:51.904 04:08:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:55.192 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:55.192 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:58.477 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:20:59.853 00:20:59.853 real 0m18.944s 00:20:59.853 user 0m4.798s 00:20:59.853 sys 0m9.910s 00:20:59.853 04:08:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.853 04:08:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.853 ************************************ 00:20:59.853 END TEST nvmf_identify_kernel_target 00:20:59.853 ************************************ 00:20:59.853 04:08:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:20:59.853 04:08:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:59.853 04:08:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.853 04:08:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.853 ************************************ 00:20:59.853 START TEST nvmf_auth_host 00:20:59.853 ************************************ 00:20:59.853 04:08:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:20:59.853 * Looking for test storage... 
00:20:59.853 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.853 --rc genhtml_branch_coverage=1 00:20:59.853 --rc genhtml_function_coverage=1 00:20:59.853 --rc genhtml_legend=1 00:20:59.853 --rc geninfo_all_blocks=1 00:20:59.853 --rc geninfo_unexecuted_blocks=1 00:20:59.853 00:20:59.853 ' 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.853 --rc genhtml_branch_coverage=1 00:20:59.853 --rc genhtml_function_coverage=1 00:20:59.853 --rc genhtml_legend=1 00:20:59.853 --rc geninfo_all_blocks=1 00:20:59.853 --rc geninfo_unexecuted_blocks=1 00:20:59.853 00:20:59.853 ' 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.853 --rc genhtml_branch_coverage=1 00:20:59.853 --rc genhtml_function_coverage=1 00:20:59.853 --rc genhtml_legend=1 00:20:59.853 --rc geninfo_all_blocks=1 00:20:59.853 --rc geninfo_unexecuted_blocks=1 00:20:59.853 00:20:59.853 ' 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:59.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.853 --rc genhtml_branch_coverage=1 00:20:59.853 --rc genhtml_function_coverage=1 00:20:59.853 --rc genhtml_legend=1 00:20:59.853 --rc geninfo_all_blocks=1 00:20:59.853 --rc geninfo_unexecuted_blocks=1 00:20:59.853 00:20:59.853 ' 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.853 04:08:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.853 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:59.854 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:20:59.854 04:08:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:05.123 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:05.123 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:05.123 04:08:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:05.123 Found net devices under 0000:18:00.0: mlx_0_0 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:05.123 Found net devices under 0000:18:00.1: mlx_0_1 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:05.123 04:08:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:05.123 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:05.124 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:05.124 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:21:05.124 altname enp24s0f0np0 00:21:05.124 altname ens785f0np0 00:21:05.124 inet 192.168.100.8/24 scope global mlx_0_0 00:21:05.124 valid_lft forever preferred_lft forever 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:05.124 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:05.124 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:21:05.124 altname enp24s0f1np1 00:21:05.124 altname ens785f1np1 00:21:05.124 inet 192.168.100.9/24 scope global mlx_0_1 00:21:05.124 valid_lft forever preferred_lft forever 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:05.124 192.168.100.9' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:05.124 192.168.100.9' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:05.124 192.168.100.9' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:05.124 
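The interface-to-IP mapping above is the part worth calling out: nvmftestinit walks the mlx netdevs and derives the two test addresses from the first IPv4 address on each. A minimal sketch of that step, using the exact pipeline from the trace (only the get_ip helper name is mine):

    # Sketch of the RDMA IP discovery traced above; get_ip is a made-up
    # helper name, the pipeline itself is taken verbatim from the trace.
    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    rdma_ips="$(get_ip mlx_0_0)
    $(get_ip mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$rdma_ips")                # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$rdma_ips" | head -n 1)  # 192.168.100.9 in this run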
04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=855247 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 855247 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 855247 ']' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.124 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=26607c76722d7d5dd1ba8f76852fd05e 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 
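Before the trace resumes below with the file= assignment for this key, it is worth unpacking what each gen_dhchap_key round does end to end. The python heredoc at nvmf/common.sh@733 is not echoed by xtrace; a sketch of the whole round, with the heredoc body reconstructed from the DHHC-1 strings this log later prints (the 4-byte trailer on the base64 payload is assumed to be a little-endian CRC-32 of the hex string):

key=$(xxd -p -c0 -l 16 /dev/urandom)      # "null 32": 16 random bytes -> 32 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])      # digest id 0 = null
crc = zlib.crc32(key).to_bytes(4, byteorder="little")     # assumed CRC-32 trailer
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
chmod 0600 "$file"

Decoding the secrets later in this log (for example DHHC-1:00:MjY2MDdj...) yields exactly the ASCII hex string plus four trailing bytes, which is what this reconstruction produces.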
00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PWy 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 26607c76722d7d5dd1ba8f76852fd05e 0 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 26607c76722d7d5dd1ba8f76852fd05e 0 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=26607c76722d7d5dd1ba8f76852fd05e 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PWy 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PWy 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PWy 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=df342ccb685930ef0da7b90a71949bbbcbd33d098c4621e7a5e9c1c11d9a51b3 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.FYh 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key df342ccb685930ef0da7b90a71949bbbcbd33d098c4621e7a5e9c1c11d9a51b3 3 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 df342ccb685930ef0da7b90a71949bbbcbd33d098c4621e7a5e9c1c11d9a51b3 3 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=df342ccb685930ef0da7b90a71949bbbcbd33d098c4621e7a5e9c1c11d9a51b3 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:05.383 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.FYh 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.FYh 00:21:05.642 04:08:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.FYh 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5653e3b7ee49b25fac0ef75484f321a2727fdb61dbd0dafc 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YtB 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5653e3b7ee49b25fac0ef75484f321a2727fdb61dbd0dafc 0 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5653e3b7ee49b25fac0ef75484f321a2727fdb61dbd0dafc 0 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5653e3b7ee49b25fac0ef75484f321a2727fdb61dbd0dafc 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YtB 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YtB 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.YtB 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bbac4521075959a5c01a84c10582011ff4b946cc67c9190d 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oUw 00:21:05.642 
04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bbac4521075959a5c01a84c10582011ff4b946cc67c9190d 2 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bbac4521075959a5c01a84c10582011ff4b946cc67c9190d 2 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bbac4521075959a5c01a84c10582011ff4b946cc67c9190d 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oUw 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oUw 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.oUw 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=51e81d76fd11296861ef26574e15c76e 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ODM 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 51e81d76fd11296861ef26574e15c76e 1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 51e81d76fd11296861ef26574e15c76e 1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=51e81d76fd11296861ef26574e15c76e 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ODM 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ODM 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ODM 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:05.642 04:08:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b022a5db6097fa4b10cba4b966186b0 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.H8R 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b022a5db6097fa4b10cba4b966186b0 1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b022a5db6097fa4b10cba4b966186b0 1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b022a5db6097fa4b10cba4b966186b0 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:05.642 04:08:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.H8R 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.H8R 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.H8R 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:05.642 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d844756f31d98b1862b31ab5812fe679bd616db0541953f9 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.sa6 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d844756f31d98b1862b31ab5812fe679bd616db0541953f9 2 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
d844756f31d98b1862b31ab5812fe679bd616db0541953f9 2 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d844756f31d98b1862b31ab5812fe679bd616db0541953f9 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.sa6 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.sa6 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.sa6 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f982f13d755657d2fb31aceca3d012b9 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vEv 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f982f13d755657d2fb31aceca3d012b9 0 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f982f13d755657d2fb31aceca3d012b9 0 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f982f13d755657d2fb31aceca3d012b9 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vEv 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vEv 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.vEv 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.901 
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=171234d48dc81cfdaffa149ae1f645583a8beb2e0970d9bce4ebbce8b9bd9acc 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yeK 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 171234d48dc81cfdaffa149ae1f645583a8beb2e0970d9bce4ebbce8b9bd9acc 3 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 171234d48dc81cfdaffa149ae1f645583a8beb2e0970d9bce4ebbce8b9bd9acc 3 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=171234d48dc81cfdaffa149ae1f645583a8beb2e0970d9bce4ebbce8b9bd9acc 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yeK 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yeK 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.yeK 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 855247 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 855247 ']' 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
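Once the target is listening, the trace below registers each generated secret with the target's keyring under the names (key0/ckey0, key1/ckey1, ...) that the DH-HMAC-CHAP attach calls reference later. rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; done by hand, the first pair would be roughly:

scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.PWy
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FYh

The file paths are the key files generated above; keyN is the host-to-controller secret and ckeyN the controller-to-host (bidirectional) secret.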
00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.901 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PWy 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.FYh ]] 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FYh 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YtB 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.oUw ]] 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oUw 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ODM 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.H8R ]] 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.H8R 00:21:06.173 04:09:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.173 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.sa6 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.vEv ]] 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.vEv 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yeK 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:06.174 04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:21:06.174 04:09:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
04:09:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:21:08.746 Waiting for block devices as requested
00:21:08.746 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:21:09.004 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:21:09.004 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:21:09.004 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:21:09.004 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:21:09.263 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:21:09.263 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:21:09.263 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:21:09.521 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:21:09.521 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:21:09.521 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:21:09.521 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:21:09.779 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:21:09.779 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:21:09.779 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:21:10.037 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:21:10.037 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:21:11.939 No valid GPT data, bailing
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host --
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:21:11.939 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420
00:21:12.197
00:21:12.197 Discovery Log Number of Records 2, Generation counter 2
00:21:12.197 =====Discovery Log Entry 0======
00:21:12.197 trtype: rdma
00:21:12.197 adrfam: ipv4
00:21:12.197 subtype: current discovery subsystem
00:21:12.197 treq: not specified, sq flow control disable supported
00:21:12.197 portid: 1
00:21:12.197 trsvcid: 4420
00:21:12.197 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:21:12.197 traddr: 192.168.100.8
00:21:12.197 eflags: none
00:21:12.197 rdma_prtype: not specified
00:21:12.197 rdma_qptype: connected
00:21:12.197 rdma_cms: rdma-cm
00:21:12.197 rdma_pkey: 0x0000
00:21:12.197 =====Discovery Log Entry 1======
00:21:12.197 trtype: rdma
00:21:12.197 adrfam: ipv4
00:21:12.197 subtype: nvme subsystem
00:21:12.197 treq: not specified, sq flow control disable supported
00:21:12.197 portid: 1
00:21:12.197 trsvcid: 4420
00:21:12.197 subnqn: nqn.2024-02.io.spdk:cnode0
00:21:12.197 traddr: 192.168.100.8
00:21:12.197 eflags: none
00:21:12.197 rdma_prtype: not specified
00:21:12.197 rdma_qptype: connected
00:21:12.197 rdma_cms: rdma-cm
00:21:12.197 rdma_pkey: 0x0000
04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host --
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.198 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.456 nvme0n1 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.456 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.457 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.715 nvme0n1 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
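Each connect_authenticate round in this stretch boils down to two host-side RPCs, which rpc_cmd issues over /var/tmp/spdk.sock; spelled out by hand for keyid 0, with every value taken from this run's trace:

scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 \
    -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

The attach only succeeds because the nvmet_auth_set_key echoes traced earlier (host/auth.sh@48-51) pushed the matching hash, dhgroup and DHHC-1 secrets into the kernel target's configfs entry for host0; the nvme0n1 markers between rounds are the namespace appearing after each successful authentication.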
00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.715 04:09:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 nvme0n1 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.974 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.233 nvme0n1 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.233 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.492 nvme0n1 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.492 04:09:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.492 nvme0n1 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.492 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 
00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.751 04:09:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.751 04:09:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.751 nvme0n1 00:21:13.751 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.751 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.752 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.752 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.752 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.011 04:09:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.011 nvme0n1 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.011 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.270 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.271 nvme0n1 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.271 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.530 nvme0n1 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.530 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.790 04:09:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.790 04:09:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.790 nvme0n1 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.790 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.049 
04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:15.049 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.050 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.309 nvme0n1 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.309 
04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.309 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.568 nvme0n1 00:21:15.568 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.568 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.568 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.568 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.568 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.568 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.568 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:15.569 
04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.569 04:09:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.828 nvme0n1 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.828 04:09:10 
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.828 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.087 nvme0n1
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=:
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=:
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.087 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.346 nvme0n1
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:16.346 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy:
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=:
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy:
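The get_main_ns_ip block (nvmf/common.sh@769-783) that repeats before every attach resolves the target address by transport. Reconstructed from the trace, the function is approximately the following; the TEST_TRANSPORT variable name is inferred, since xtrace only shows its expanded value rdma:

  get_main_ns_ip() {  # approximation of nvmf/common.sh@769-783
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # the two "[[ -z ... ]]" tests at @775 guard the transport and its mapping:
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the *name* of the variable to read
      [[ -z ${!ip} ]] && return 1            # indirect expansion; 192.168.100.8 here
      echo "${!ip}"
  }
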
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]]
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=:
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:16.605 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:16.606 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:16.606 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:16.606 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.606 04:09:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.864 nvme0n1
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:21:16.864 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==:
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==:
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==:
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]]
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==:
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.865 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.433 nvme0n1
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
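The host/auth.sh@58 expansion is what makes keyid 4 connect without a controller key: ${var:+word} expands to word only when var is set and non-empty, so for the empty ckey the array stays empty and bdev_nvme_attach_controller is called without --dhchap-ctrlr-key (compare the key4 attach lines above with the key1/key2 ones). In isolation (array contents here are illustrative placeholders):

  ckeys=( [1]="DHHC-1:02:placeholder:" [4]="" )   # keyid 4 has no controller key
  keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 0 -> flag omitted, authentication is unidirectional
  keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey1 -> bidirectional
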
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S:
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg:
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S:
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]]
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg:
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.433 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.693 nvme0n1
00:21:17.693 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.693 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:17.693 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.693 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.693 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:17.693 04:09:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==:
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3:
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==:
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]]
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3:
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.693 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:18.262 nvme0n1
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
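The secrets swept in these passes follow the NVMe-oF in-band authentication secret representation DHHC-1:<t>:<base64>:, where, as I read the format used by tools like nvme-cli's gen-dhchap-key, <t> identifies the hash used to transform the secret (00 = untransformed, 01/02/03 = SHA-256/384/512) and the base64 payload carries the key material followed by a 4-byte CRC-32. A quick field split, using one of the keys from this log (illustrative only; the CRC is not verified here):

  secret='DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==:'
  IFS=: read -r _ transform b64 _ <<<"$secret"
  echo "transform-hash id: $transform"        # 02
  echo "$b64" | base64 -d | wc -c             # 52 = 48-byte secret + 4-byte CRC-32 trailer
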
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=:
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=:
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.262 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:18.522 nvme0n1
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy:
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=:
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy:
00:21:18.522 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]]
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=:
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.523 04:09:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:19.091 nvme0n1
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
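nvmet_auth_set_key (host/auth.sh@42-51) provisions the target side of each pass: the echo lines at @48-51 push 'hmac(<digest>)', the DH group, and the two DHHC-1 secrets. The redirect targets are not captured by xtrace; on a kernel nvmet target these would plausibly be the per-host dhchap attributes in configfs, along these lines (the paths and attribute names below are an assumption, not shown in this log):

  # Presumed shape of nvmet_auth_set_key; only the echo arguments are from the trace.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe8192      > "$host/dhchap_dhgroup"
  echo "$key"         > "$host/dhchap_key"                      # DHHC-1:... host secret
  [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"      # mirrors the @51 guard
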
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.091 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==:
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==:
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==:
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]]
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==:
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:19.350 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:19.351 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.351 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.351 04:09:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:19.919 nvme0n1
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S:
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg:
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S:
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]]
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg:
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.919 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:20.487 nvme0n1
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
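The for-loop frames at host/auth.sh@100-102 place this whole excerpt inside a three-level sweep: sha256 finishes with ffdhe8192 below, and the trace then re-enters at @100 with sha384/ffdhe2048. A skeleton reconstructed from those frames (the array contents beyond the values visible in the trace are not shown in this log):

  for digest in "${digests[@]}"; do            # sha256, then sha384, ...
    for dhgroup in "${dhgroups[@]}"; do        # ... ffdhe4096 ffdhe6144 ffdhe8192 ...
      for keyid in "${!keys[@]}"; do           # 0 1 2 3 4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side
      done
    done
  done
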
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==:
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3:
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==:
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]]
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3:
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:21:20.487 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.488 04:09:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:21.055 nvme0n1
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=:
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=:
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.055 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:21.623 nvme0n1
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy:
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=:
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy:
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]]
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=:
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:21.623 04:09:15
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.623 04:09:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.882 nvme0n1 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.883 
04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.883 
04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.883 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.142 nvme0n1 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.142 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 nvme0n1 00:21:22.401 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.402 04:09:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.402 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.661 nvme0n1 00:21:22.661 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.661 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.661 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.661 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.661 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.661 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.661 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 
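The nvmet_auth_set_key calls traced here provision the target side of DH-HMAC-CHAP for each (digest, dhgroup, keyid) combination: the helper picks an hmac(shaN) string, an ffdheN group, and a DHHC-1 secret, then echoes them into the target's configuration (the echo 'hmac(sha384)' / echo ffdhe2048 / echo DHHC-1:... entries at host/auth.sh@48-50). A minimal sketch of that provisioning against the kernel nvmet configfs follows; the configfs mount point, host entry, and attribute names are assumptions based on stock Linux nvmet, not something this trace shows directly:

# Hypothetical replay of one nvmet_auth_set_key call (sha384 digest,
# ffdhe2048 group). Assumes configfs is mounted at /sys/kernel/config and
# the host NQN entry already exists under the nvmet hosts directory.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # negotiated digest
echo 'ffdhe2048'    > "$host_dir/dhchap_dhgroup"   # FFDHE group for the DH step
echo "$key"         > "$host_dir/dhchap_key"       # the DHHC-1:... secret echoed at @50
# Bidirectional auth only: set the controller secret when a ckey exists,
# mirroring the [[ -z $ckey ]] guard at host/auth.sh@51 in the trace.
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"
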
00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.662 04:09:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.662 nvme0n1 
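At this point the host side has attached: connect_authenticate first pins the initiator to exactly one digest/dhgroup pair via bdev_nvme_set_options, then attaches over RDMA with the named DH-HMAC-CHAP key, so the bare nvme0n1 line above is the bdev created on a successful, authenticated connect for that specific combination. A sketch of the same sequence replayed against scripts/rpc.py follows; that rpc_cmd wraps rpc.py, and that the key4 name was registered with the SPDK keyring beforehand (e.g. via keyring_file_add_key), are assumptions:

# Hypothetical replay of the connect_authenticate step just traced
# (sha384 digest, ffdhe2048 group, keyid 4; no ctrlr key, since ckey is
# empty for this keyid). All flags are taken verbatim from the trace.
rpc=scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4
# Verification and teardown, matching host/auth.sh@64-65 below:
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0
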
00:21:22.662 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.662 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.662 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.662 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.662 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:22.921 04:09:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.921 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.922 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.922 nvme0n1 00:21:22.922 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.922 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.922 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.922 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.922 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.181 nvme0n1 00:21:23.181 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.440 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.700 nvme0n1 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.700 04:09:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.700 04:09:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.960 nvme0n1 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:23.960 04:09:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.960 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 nvme0n1 00:21:24.219 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.219 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.219 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:21:24.219 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.219 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.219 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.219 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.220 04:09:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.220 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 nvme0n1 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.478 04:09:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.478 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:24.479 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:24.479 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:24.479 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:24.479 04:09:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:24.479 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.479 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.479 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.737 nvme0n1 00:21:24.737 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.737 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.737 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.737 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.737 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.737 04:09:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.737 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.738 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.997 nvme0n1 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.997 04:09:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.997 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.256 nvme0n1 00:21:25.256 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.256 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.256 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.256 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.256 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.256 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.515 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.515 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.515 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.515 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.515 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.515 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.515 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.516 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.775 nvme0n1 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.775 04:09:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.775 04:09:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.034 nvme0n1 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.034 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
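[The get_main_ns_ip trace that recurs throughout these entries is the host-address lookup used for every attach: an associative array maps each transport to the name of the environment variable holding its address, the entry for the active transport (rdma here) is selected, and the value is resolved through bash indirect expansion, yielding 192.168.100.8. A minimal reconstruction from the trace follows; TEST_TRANSPORT is an assumed variable name, since the trace only shows its expanded value (rdma), and the upstream helper in nvmf/common.sh may differ in detail:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP    # physical RDMA runs use the first target IP
            [tcp]=NVMF_INITIATOR_IP
        )
        # Bail out rather than guess when the transport has no candidate variable.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1        # indirect expansion: value of the named variable
        echo "${!ip}"                      # 192.168.100.8 in this run
    }
]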
00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.035 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.609 nvme0n1 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:26.609 04:09:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.609 04:09:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.869 nvme0n1 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
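[Each pass above follows the same shape per dhgroup/keyid pair: nvmet_auth_set_key installs the key (and controller key, when present) on the kernel nvmet target, bdev_nvme_set_options pins the SPDK host to the digest and DH group under test, bdev_nvme_attach_controller connects over RDMA with in-band DH-HMAC-CHAP, bdev_nvme_get_controllers piped through jq confirms nvme0 came up, and bdev_nvme_detach_controller tears it down before the next keyid. The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion contributes the controller-key flag only when a ckey exists, so keyid 4 (empty ckey) exercises unidirectional authentication. In the DHHC-1:NN: secrets, the two-digit field identifies the hash used to transform the stored secret (00 = none, 01/02/03 = SHA-256/384/512) and varies independently of the sha384 negotiation digest under test. Condensed, one ffdhe6144 iteration reduces to the sketch below; key3/ckey3 are keyring names assumed to have been registered earlier in the run, a step this excerpt does not show:

    # target side: install key/ckey for keyid 3 under hmac(sha384)/ffdhe6144
    nvmet_auth_set_key sha384 ffdhe6144 3
    # host side: restrict negotiation to the combination under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0
]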
00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.869 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.128 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.388 nvme0n1 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.388 04:09:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.647 nvme0n1 00:21:27.647 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.647 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.647 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.647 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.647 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.647 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.906 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.474 nvme0n1 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:28.474 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.475 04:09:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:29.043 nvme0n1 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.043 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.612 nvme0n1 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:29.612 
04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.612 04:09:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.180 nvme0n1 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.180 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.181 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.749 nvme0n1 00:21:30.749 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.749 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.749 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.749 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.749 04:09:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.749 04:09:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.749 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.009 nvme0n1 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:31.009 04:09:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.009 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.268 nvme0n1 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.268 04:09:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:31.268 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:31.269 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:31.269 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:31.269 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:31.269 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.269 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.269 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.528 nvme0n1 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:31.528 04:09:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.528 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.788 nvme0n1 00:21:31.788 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.788 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.788 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.788 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.788 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.788 04:09:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:31.788 04:09:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.788 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.047 nvme0n1 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.047 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.048 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.307 nvme0n1 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.307 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.567 nvme0n1 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.567 
04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.567 
04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.567 04:09:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.834 nvme0n1 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:32.834 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:32.835 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:32.836 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:32.836 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:32.836 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:32.836 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.836 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.104 nvme0n1 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.104 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.363 nvme0n1 00:21:33.363 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.363 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.363 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.363 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.363 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.363 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.363 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:33.364 04:09:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 
00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.364 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.623 nvme0n1 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:33.623 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.624 04:09:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.883 nvme0n1 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.883 
04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:33.883 04:09:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.883 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.142 nvme0n1 00:21:34.142 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.142 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.142 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.142 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.142 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.142 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:34.402 04:09:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.402 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.662 nvme0n1 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.662 04:09:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.662 04:09:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.922 nvme0n1 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:34.922 04:09:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.922 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.181 nvme0n1 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.181 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # 
local digest dhgroup keyid ckey 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.440 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.441 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.699 nvme0n1 00:21:35.699 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.699 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.699 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.699 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.699 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.699 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.699 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.700 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.700 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.700 04:09:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.700 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.266 nvme0n1 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.266 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.525 nvme0n1 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.525 04:09:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.525 04:09:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.091 nvme0n1 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 
00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY2MDdjNzY3MjJkN2Q1ZGQxYmE4Zjc2ODUyZmQwNWUVBaLy: 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: ]] 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGYzNDJjY2I2ODU5MzBlZjBkYTdiOTBhNzE5NDliYmJjYmQzM2QwOThjNDYyMWU3YTVlOWMxYzExZDlhNTFiMwmAsp0=: 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.091 04:09:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.091 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.658 nvme0n1 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:37.658 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:37.659 04:09:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.659 04:09:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.226 nvme0n1 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
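For reference, each iteration traced above performs the same connect/verify/detach cycle against the target. The sketch below reproduces one iteration by hand; it is a minimal, illustrative reconstruction, assuming an SPDK nvmf target already listening on rdma 192.168.100.8:4420 with nqn.2024-02.io.spdk:cnode0 configured for DH-HMAC-CHAP, host and controller keys already registered under the names key1/ckey1 (key registration is release-dependent and not shown), and scripts/rpc.py as the path to SPDK's RPC client (the rpc_cmd wrapper in the trace; path assumed here).

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate iteration (sha512 / ffdhe8192, keyid=1).
rpc=scripts/rpc.py   # assumed path to the SPDK JSON-RPC client

# Restrict the initiator to the digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Attach with bidirectional authentication: --dhchap-key authenticates the
# host to the controller, --dhchap-ctrlr-key the controller to the host.
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The attach only succeeds if authentication passed; verify, then tear down.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0

The negative tests further down in the trace invert this flow: attaching without --dhchap-key, or with a mismatched key pair, is expected to fail, which is why the NOT wrapper there treats the JSON-RPC error responses (code -5, "Input/output error" from bdev_nvme_attach_controller; code -13, "Permission denied" from bdev_nvme_set_keys) as a passing result.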
00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.226 04:09:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.793 nvme0n1 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDg0NDc1NmYzMWQ5OGIxODYyYjMxYWI1ODEyZmU2NzliZDYxNmRiMDU0MTk1M2Y5+7Z2HQ==: 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: ]] 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:Zjk4MmYxM2Q3NTU2NTdkMmZiMzFhY2VjYTNkMDEyYjm7dj/3: 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.793 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:38.794 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:38.794 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:38.794 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:38.794 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:38.794 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:38.794 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.794 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.361 nvme0n1 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.361 04:09:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcxMjM0ZDQ4ZGM4MWNmZGFmZmExNDlhZTFmNjQ1NTgzYThiZWIyZTA5NzBkOWJjZTRlYmJjZThiOWJkOWFjY4jN/dM=: 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.361 04:09:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.928 nvme0n1 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:39.928 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.928 04:09:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.187 request: 00:21:40.187 { 00:21:40.187 "name": "nvme0", 00:21:40.187 "trtype": "rdma", 00:21:40.187 "traddr": "192.168.100.8", 00:21:40.187 "adrfam": "ipv4", 00:21:40.187 "trsvcid": "4420", 00:21:40.187 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:40.187 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:40.187 "prchk_reftag": false, 00:21:40.187 "prchk_guard": false, 00:21:40.187 "hdgst": false, 00:21:40.187 "ddgst": false, 00:21:40.187 "allow_unrecognized_csi": false, 00:21:40.187 "method": "bdev_nvme_attach_controller", 00:21:40.187 "req_id": 1 00:21:40.187 } 00:21:40.187 Got JSON-RPC error response 00:21:40.187 response: 00:21:40.187 { 00:21:40.187 "code": -5, 00:21:40.187 "message": "Input/output error" 00:21:40.187 } 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:40.187 04:09:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.187 request: 00:21:40.187 { 00:21:40.187 "name": "nvme0", 00:21:40.187 "trtype": "rdma", 00:21:40.187 "traddr": "192.168.100.8", 00:21:40.187 "adrfam": "ipv4", 00:21:40.187 "trsvcid": "4420", 00:21:40.187 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:40.187 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:40.187 "prchk_reftag": false, 00:21:40.187 "prchk_guard": false, 00:21:40.187 "hdgst": false, 00:21:40.187 "ddgst": false, 00:21:40.187 "dhchap_key": "key2", 00:21:40.187 "allow_unrecognized_csi": false, 00:21:40.187 "method": "bdev_nvme_attach_controller", 00:21:40.187 "req_id": 1 00:21:40.187 } 00:21:40.187 Got JSON-RPC error response 00:21:40.187 response: 00:21:40.187 { 00:21:40.187 "code": -5, 00:21:40.187 "message": "Input/output error" 00:21:40.187 } 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.187 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.446 04:09:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.446 request: 00:21:40.446 { 00:21:40.446 "name": "nvme0", 00:21:40.446 "trtype": "rdma", 00:21:40.446 "traddr": "192.168.100.8", 00:21:40.446 "adrfam": "ipv4", 00:21:40.446 "trsvcid": "4420", 00:21:40.446 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:40.446 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:40.446 "prchk_reftag": false, 00:21:40.446 "prchk_guard": false, 00:21:40.446 "hdgst": false, 00:21:40.446 "ddgst": false, 00:21:40.446 "dhchap_key": "key1", 00:21:40.446 "dhchap_ctrlr_key": "ckey2", 00:21:40.446 "allow_unrecognized_csi": false, 00:21:40.446 "method": "bdev_nvme_attach_controller", 00:21:40.446 "req_id": 1 00:21:40.446 } 00:21:40.446 Got JSON-RPC error response 00:21:40.446 response: 00:21:40.446 { 00:21:40.446 "code": -5, 00:21:40.446 "message": "Input/output error" 00:21:40.446 } 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.446 
04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.446 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.705 nvme0n1 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.705 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.705 request: 00:21:40.705 { 00:21:40.705 "name": "nvme0", 00:21:40.705 "dhchap_key": "key1", 00:21:40.705 "dhchap_ctrlr_key": "ckey2", 00:21:40.705 "method": "bdev_nvme_set_keys", 00:21:40.705 "req_id": 1 00:21:40.705 } 00:21:40.705 Got JSON-RPC error response 00:21:40.705 response: 00:21:40.706 { 00:21:40.706 "code": -13, 00:21:40.706 "message": "Permission denied" 00:21:40.706 } 00:21:40.706 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.706 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:40.706 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.706 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.706 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.706 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.706 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:40.706 04:09:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.706 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.706 04:09:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.706 04:09:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:40.706 04:09:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:41.639 04:09:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.639 04:09:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:41.639 04:09:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.639 04:09:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.897 04:09:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.897 04:09:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:41.897 04:09:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:42.938 04:09:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:42.938 04:09:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.938 04:09:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.938 04:09:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.938 04:09:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.938 04:09:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:42.938 04:09:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe2048 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY1M2UzYjdlZTQ5YjI1ZmFjMGVmNzU0ODRmMzIxYTI3MjdmZGI2MWRiZDBkYWZjRlAzjg==: 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: ]] 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJhYzQ1MjEwNzU5NTlhNWMwMWE4NGMxMDU4MjAxMWZmNGI5NDZjYzY3YzkxOTBkHIqwow==: 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.873 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.132 nvme0n1 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S: 
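The nvmet_auth_set_key helper replayed above provisions the target side of DH-HMAC-CHAP: for a given digest, DH group, and key id it writes the hmac(sha256) hash, the ffdhe2048 group, and the DHHC-1 secrets for host nqn.2024-02.io.spdk:host0 into the kernel nvmet configfs tree. A minimal manual sketch of the same step follows, assuming the standard nvmet per-host attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), which the trace itself does not print:

# Sketch: target-side DH-HMAC-CHAP provisioning; the host configfs entry is assumed to already exist
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"    # digest used for the CHAP exchange
echo ffdhe2048 > "$host/dhchap_dhgroup"      # DH group for the key exchange
echo 'DHHC-1:01:NTFlODFkNzZmZDExMjk2ODYxZWYyNjU3NGUxNWM3NmXxv82S:' > "$host/dhchap_key"
echo 'DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg:' > "$host/dhchap_ctrl_key"

Once key id 2 is in place on the target, the NOT-wrapped bdev_nvme_set_keys call below deliberately pairs key2 with ckey1; the -13 "Permission denied" JSON-RPC response is the expected outcome, which the NOT wrapper converts into a pass, just as the earlier mismatched bdev_nvme_attach_controller attempts were expected to fail with -5.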
00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: ]] 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGIwMjJhNWRiNjA5N2ZhNGIxMGNiYTRiOTY2MTg2YjDZzNlg: 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.132 request: 00:21:44.132 { 00:21:44.132 "name": "nvme0", 00:21:44.132 "dhchap_key": "key2", 00:21:44.132 "dhchap_ctrlr_key": "ckey1", 00:21:44.132 "method": "bdev_nvme_set_keys", 00:21:44.132 "req_id": 1 00:21:44.132 } 00:21:44.132 Got JSON-RPC error response 00:21:44.132 response: 00:21:44.132 { 00:21:44.132 "code": -13, 00:21:44.132 "message": "Permission denied" 00:21:44.132 } 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:44.132 04:09:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:45.066 04:09:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.066 04:09:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:45.066 04:09:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:45.066 04:09:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.066 04:09:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.066 04:09:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:45.066 04:09:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:46.440 04:09:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:46.440 04:09:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:46.440 04:09:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.440 04:09:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.440 04:09:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.440 04:09:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:46.440 04:09:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:47.375 rmmod nvme_rdma 00:21:47.375 rmmod nvme_fabrics 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 855247 ']' 00:21:47.375 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 855247 00:21:47.376 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 855247 ']' 00:21:47.376 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 855247 00:21:47.376 04:09:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:21:47.376 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.376 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 855247 00:21:47.376 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.376 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.376 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 855247' 00:21:47.376 killing process with pid 855247 00:21:47.376 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 855247 00:21:47.376 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 855247 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:21:47.634 04:09:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:21:50.165 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 
0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:50.165 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:53.450 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:21:54.829 04:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PWy /tmp/spdk.key-null.YtB /tmp/spdk.key-sha256.ODM /tmp/spdk.key-sha384.sa6 /tmp/spdk.key-sha512.yeK /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:21:54.829 04:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:21:58.116 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:58.116 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:59.052 00:21:59.052 real 0m59.348s 00:21:59.052 user 0m46.673s 00:21:59.052 sys 0m14.400s 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.053 ************************************ 00:21:59.053 END TEST nvmf_auth_host 00:21:59.053 ************************************ 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.053 ************************************ 00:21:59.053 START TEST nvmf_bdevperf 00:21:59.053 ************************************ 00:21:59.053 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:21:59.053 * Looking for test 
storage... 00:21:59.312 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:59.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.312 --rc genhtml_branch_coverage=1 00:21:59.312 --rc genhtml_function_coverage=1 00:21:59.312 --rc genhtml_legend=1 00:21:59.312 --rc geninfo_all_blocks=1 00:21:59.312 --rc geninfo_unexecuted_blocks=1 00:21:59.312 00:21:59.312 ' 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:59.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.312 --rc genhtml_branch_coverage=1 00:21:59.312 --rc genhtml_function_coverage=1 00:21:59.312 --rc genhtml_legend=1 00:21:59.312 --rc geninfo_all_blocks=1 00:21:59.312 --rc geninfo_unexecuted_blocks=1 00:21:59.312 00:21:59.312 ' 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:59.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.312 --rc genhtml_branch_coverage=1 00:21:59.312 --rc genhtml_function_coverage=1 00:21:59.312 --rc genhtml_legend=1 00:21:59.312 --rc geninfo_all_blocks=1 00:21:59.312 --rc geninfo_unexecuted_blocks=1 00:21:59.312 00:21:59.312 ' 00:21:59.312 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:59.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.312 --rc genhtml_branch_coverage=1 00:21:59.312 --rc genhtml_function_coverage=1 00:21:59.312 --rc genhtml_legend=1 00:21:59.312 --rc geninfo_all_blocks=1 00:21:59.312 --rc geninfo_unexecuted_blocks=1 00:21:59.313 00:21:59.313 ' 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.313 04:09:53 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.313 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:21:59.313 04:09:53 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.313 04:09:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.883 04:09:59 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:05.883 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:05.883 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:05.883 Found net devices under 0000:18:00.0: mlx_0_0 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:05.883 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:05.884 Found net devices under 0000:18:00.1: mlx_0_1 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:05.884 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:05.884 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:22:05.884 altname enp24s0f0np0 00:22:05.884 altname ens785f0np0 00:22:05.884 inet 192.168.100.8/24 scope global mlx_0_0 00:22:05.884 valid_lft forever preferred_lft forever 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:05.884 04:09:59 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:22:05.884 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:22:05.884 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff
00:22:05.884 altname enp24s0f1np1
00:22:05.884 altname ens785f1np1
00:22:05.884 inet 192.168.100.9/24 scope global mlx_0_1
00:22:05.884 valid_lft forever preferred_lft forever
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:22:05.884 192.168.100.9'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:22:05.884 192.168.100.9'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:22:05.884 192.168.100.9'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:05.884 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:22:05.885 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=871229
00:22:05.885 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 871229
00:22:05.885 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:22:05.885 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 871229 ']'
00:22:05.885 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:05.885 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:05.885 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:05.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:05.885 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:05.885 04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:22:05.885 [2024-12-10 04:09:59.469084] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
[2024-12-10 04:09:59.469125] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-10 04:09:59.526934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-12-10 04:09:59.565550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-10 04:09:59.565588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-10 04:09:59.565595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-10 04:09:59.565601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-10 04:09:59.565605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
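Everything nvmf/common.sh does above to discover the two RDMA endpoints reduces to a few lines of shell. A minimal stand-alone sketch of the traced pipeline (function names mirror the nvmf/common.sh helpers seen in the xtrace; get_rdma_if_list is assumed to print the RDMA-backed netdev names, mlx_0_0 and mlx_0_1 here; this is an illustration, not the script itself):

    #!/usr/bin/env bash
    # First IPv4 address bound to an interface: "ip -o -4" prints one record per
    # address, field 4 is "ADDR/PREFIX", and cut strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # One address per RDMA-capable interface, newline-separated.
    RDMA_IP_LIST=$(for nic in $(get_rdma_if_list); do get_ip_address "$nic"; done)

    # First line becomes the target address, second line the failover address.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9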
00:22:05.885 [2024-12-10 04:09:59.566806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-10 04:09:59.566892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-12-10 04:09:59.566894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-12-10 04:09:59.716274] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ed0800/0x1ed4cf0) succeed.
[2024-12-10 04:09:59.724404] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ed1df0/0x1f16390) succeed.
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
Malloc0
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:22:05.885 [2024-12-10 04:09:59.868403] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
{
"params": {
"name": "Nvme$subsystem",
"trtype": "$TEST_TRANSPORT",
"traddr": "$NVMF_FIRST_TARGET_IP",
"adrfam": "ipv4",
"trsvcid": "$NVMF_PORT",
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
"hdgst": ${hdgst:-false},
"ddgst": ${ddgst:-false}
},
"method": "bdev_nvme_attach_controller"
}
EOF
)")
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
04:09:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
"params": {
"name": "Nvme1",
"trtype": "rdma",
"traddr": "192.168.100.8",
"adrfam": "ipv4",
"trsvcid": "4420",
"subnqn": "nqn.2016-06.io.spdk:cnode1",
"hostnqn": "nqn.2016-06.io.spdk:host1",
"hdgst": false,
"ddgst": false
},
"method": "bdev_nvme_attach_controller"
}'
[2024-12-10 04:09:59.917765] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
[2024-12-10 04:09:59.917812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871356 ]
[2024-12-10 04:09:59.976053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-10 04:10:00.018905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
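Condensed, the tgt_init sequence traced above is: start nvmf_tgt, wait for its RPC socket (the waitforlisten loop polling /var/tmp/spdk.sock), then issue five RPCs. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the equivalent by hand is roughly the following (a sketch of the same calls, not a verbatim extract of the script):

    # Start the target on cores 1-3 (-m 0xE) with all tracepoint groups (-e 0xFFFF).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    rpc=./scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420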
00:22:07.081 19200.00 IOPS, 75.00 MiB/s
00:22:07.081 Latency(us)
[2024-12-10T03:10:01.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:07.081 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:07.081 Verification LBA range: start 0x0 length 0x4000
00:22:07.081 Nvme1n1 : 1.01 19218.70 75.07 0.00 0.00 6626.79 2500.08 11505.21
[2024-12-10T03:10:01.470Z] ===================================================================================================================
[2024-12-10T03:10:01.470Z] Total : 19218.70 75.07 0.00 0.00 6626.79 2500.08 11505.21
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=871639
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
{
"params": {
"name": "Nvme$subsystem",
"trtype": "$TEST_TRANSPORT",
"traddr": "$NVMF_FIRST_TARGET_IP",
"adrfam": "ipv4",
"trsvcid": "$NVMF_PORT",
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
"hdgst": ${hdgst:-false},
"ddgst": ${ddgst:-false}
},
"method": "bdev_nvme_attach_controller"
}
EOF
)")
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
04:10:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
"params": {
"name": "Nvme1",
"trtype": "rdma",
"traddr": "192.168.100.8",
"adrfam": "ipv4",
"trsvcid": "4420",
"subnqn": "nqn.2016-06.io.spdk:cnode1",
"hostnqn": "nqn.2016-06.io.spdk:host1",
"hdgst": false,
"ddgst": false
},
"method": "bdev_nvme_attach_controller"
}'
[2024-12-10 04:10:01.431599] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
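Both bdevperf runs read their bdev configuration from an anonymous file descriptor (--json /dev/fd/62 for the 1-second run, /dev/fd/63 here), i.e. bash process substitution of the gen_nvmf_target_json output. Written out by hand, the 15-second run would look roughly like the following; the trace only prints the inner params object, so the outer subsystems/config envelope here is assumed from SPDK's JSON-config format:

    ./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15 -f --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )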
00:22:07.081 [2024-12-10 04:10:01.431649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871639 ]
00:22:07.340 [2024-12-10 04:10:01.489104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:07.340 [2024-12-10 04:10:01.526436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:07.340 Running I/O for 15 seconds...
00:22:09.654 19110.00 IOPS, 74.65 MiB/s
[2024-12-10T03:10:04.611Z] 19200.00 IOPS, 75.00 MiB/s
[2024-12-10T03:10:04.611Z] 04:10:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 871229
00:22:10.222 04:10:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:22:11.051 17194.67 IOPS, 67.17 MiB/s
[2024-12-10T03:10:05.440Z] [2024-12-10 04:10:05.423143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-10 04:10:05.423179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:60413000 sqhd:7210 p:0 m:0 dnr:0
[... this print_command / print_completion pair then repeats for every I/O still queued on qid:1 when the target dies: WRITE commands covering lba:19512 through lba:20472 (len:8 each, SGL DATA BLOCK OFFSET 0x0 len:0x1000) followed by READ commands from lba:19456 upward (len:8, SGL KEYED DATA BLOCK, addresses descending from 0x2000043fe000, key:0x180f00), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:60413000 sqhd:7210 p:0 m:0 dnr:0; the excerpt breaks off mid-record at 04:10:05.424869 ...]
READ sqid:1 cid:40 nsid:1 lba:19488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x180f00 00:22:11.054 [2024-12-10 04:10:05.424874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:60413000 sqhd:7210 p:0 m:0 dnr:0 00:22:11.054 [2024-12-10 04:10:05.426679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:11.054 [2024-12-10 04:10:05.426691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:11.054 [2024-12-10 04:10:05.426696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19496 len:8 PRP1 0x0 PRP2 0x0 00:22:11.054 [2024-12-10 04:10:05.426702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.054 [2024-12-10 04:10:05.429272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:11.313 [2024-12-10 04:10:05.442599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:22:11.313 [2024-12-10 04:10:05.445754] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:11.313 [2024-12-10 04:10:05.445775] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:11.313 [2024-12-10 04:10:05.445783] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:22:12.140 12896.00 IOPS, 50.38 MiB/s [2024-12-10T03:10:06.529Z] [2024-12-10 04:10:06.449617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:22:12.140 [2024-12-10 04:10:06.449635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:12.140 [2024-12-10 04:10:06.449813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:12.140 [2024-12-10 04:10:06.449821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:12.140 [2024-12-10 04:10:06.449828] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:22:12.140 [2024-12-10 04:10:06.449837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
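The long run of ABORTED - SQ DELETION completions above is the expected signature of a target-side disconnect: every WRITE and READ still queued on I/O submission queue 1 is completed manually with that status (nvme_qpair_abort_queued_reqs), after which the host's reconnect attempts fail with RDMA_CM_EVENT_REJECTED until a target is listening again. A quick way to size such a burst from a saved console log (hypothetical post-processing commands, not part of the test suite; assumes the output above was captured as bdevperf.log):

    grep -c 'ABORTED - SQ DELETION' bdevperf.log                                # count of aborted queued I/Os
    grep -o 'lba:[0-9]*' bdevperf.log | cut -d: -f2 | sort -n | sed -n '1p;$p'  # lowest and highest LBA hit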
00:22:12.140 [2024-12-10 04:10:06.453944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:12.140 [2024-12-10 04:10:06.457252] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:12.140 [2024-12-10 04:10:06.457280] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:12.140 [2024-12-10 04:10:06.457286] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:22:13.336 10316.80 IOPS, 40.30 MiB/s [2024-12-10T03:10:07.725Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 871229 Killed "${NVMF_APP[@]}" "$@" 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=872740 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 872740 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 872740 ']' 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:13.336 [2024-12-10 04:10:07.447749] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:22:13.336 [2024-12-10 04:10:07.447790] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.336 [2024-12-10 04:10:07.461087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:22:13.336 [2024-12-10 04:10:07.461108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
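The '871229 Killed' line from bdevperf.sh:35 is the scripted part of this test: the first nvmf_tgt instance is killed out from under the running bdevperf job, and tgt_init immediately brings up a replacement (pid 872740) while the host keeps retrying the qpair. Collapsed from the xtrace, the restart step is roughly the following (a simplified sketch; the real helpers live in the nvmf test common code):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPC connections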
00:22:13.336 [2024-12-10 04:10:07.461278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:13.336 [2024-12-10 04:10:07.461287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:13.336 [2024-12-10 04:10:07.461295] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:22:13.336 [2024-12-10 04:10:07.461304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:13.336 [2024-12-10 04:10:07.465803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:13.336 [2024-12-10 04:10:07.468220] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:13.336 [2024-12-10 04:10:07.468237] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:13.336 [2024-12-10 04:10:07.468244] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:22:13.336 [2024-12-10 04:10:07.506451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:13.336 [2024-12-10 04:10:07.545460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.336 [2024-12-10 04:10:07.545493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.336 [2024-12-10 04:10:07.545503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.336 [2024-12-10 04:10:07.545509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.336 [2024-12-10 04:10:07.545513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.336 [2024-12-10 04:10:07.546649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.336 [2024-12-10 04:10:07.546751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.336 [2024-12-10 04:10:07.546753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.336 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:13.336 [2024-12-10 04:10:07.699222] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x246a800/0x246ecf0) succeed. 
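The reactor lines follow directly from the -m 0xE core mask handed to nvmf_tgt: 0xE is binary 1110, so bits 1 through 3 are set, core 0 stays free, and the app reports 'Total cores available: 3' with one reactor on each of cores 1, 2 and 3. For example:

    echo "obase=2; $((0xE))" | bc   # -> 1110, i.e. CPU set {1,2,3}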
00:22:13.336 [2024-12-10 04:10:07.707963] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x246bdf0/0x24b0390) succeed. 00:22:13.595 8597.33 IOPS, 33.58 MiB/s [2024-12-10T03:10:07.984Z] 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:13.595 Malloc0 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.595 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:13.595 [2024-12-10 04:10:07.845437] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:13.596 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.596 04:10:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 871639 00:22:14.163 [2024-12-10 04:10:08.472029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:22:14.163 [2024-12-10 04:10:08.472056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:14.163 [2024-12-10 04:10:08.472219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:14.163 [2024-12-10 04:10:08.472228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:14.163 [2024-12-10 04:10:08.472235] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:22:14.163 [2024-12-10 04:10:08.472244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
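Interleaved with the host's reconnect churn, the replacement target is being configured over the RPC socket. Pulled out of the rpc_cmd traces above, the whole bring-up is five calls (rpc_cmd is the harness wrapper around scripts/rpc.py, so an equivalent standalone sequence would be):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the listener notice appears ('NVMe/RDMA Target Listening on 192.168.100.8 port 4420'), the host's next reset attempt succeeds and throughput recovers, as the IOPS ticker below shows.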
00:22:14.163 [2024-12-10 04:10:08.481234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:14.163 [2024-12-10 04:10:08.525641] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:22:15.359 7897.43 IOPS, 30.85 MiB/s [2024-12-10T03:10:11.126Z] 9307.00 IOPS, 36.36 MiB/s [2024-12-10T03:10:12.063Z] 10394.00 IOPS, 40.60 MiB/s [2024-12-10T03:10:12.999Z] 11272.80 IOPS, 44.03 MiB/s [2024-12-10T03:10:13.937Z] 11992.00 IOPS, 46.84 MiB/s [2024-12-10T03:10:14.874Z] 12589.33 IOPS, 49.18 MiB/s [2024-12-10T03:10:15.810Z] 13094.77 IOPS, 51.15 MiB/s [2024-12-10T03:10:17.188Z] 13528.57 IOPS, 52.85 MiB/s [2024-12-10T03:10:17.188Z] 13905.07 IOPS, 54.32 MiB/s 00:22:22.799 Latency(us) 00:22:22.799 [2024-12-10T03:10:17.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.799 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:22.799 Verification LBA range: start 0x0 length 0x4000 00:22:22.799 Nvme1n1 : 15.00 13904.44 54.31 11158.70 0.00 5088.73 351.95 1037701.88 00:22:22.799 [2024-12-10T03:10:17.188Z] =================================================================================================================== 00:22:22.799 [2024-12-10T03:10:17.188Z] Total : 13904.44 54.31 11158.70 0.00 5088.73 351.95 1037701.88 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:22.799 rmmod nvme_rdma 00:22:22.799 rmmod nvme_fabrics 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 872740 ']' 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 872740 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 872740 ']' 00:22:22.799 04:10:16 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 872740 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.799 04:10:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 872740 00:22:22.799 04:10:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:22.799 04:10:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:22.799 04:10:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 872740' 00:22:22.799 killing process with pid 872740 00:22:22.799 04:10:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 872740 00:22:22.799 04:10:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 872740 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:23.059 00:22:23.059 real 0m23.912s 00:22:23.059 user 1m1.895s 00:22:23.059 sys 0m5.446s 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:23.059 ************************************ 00:22:23.059 END TEST nvmf_bdevperf 00:22:23.059 ************************************ 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.059 ************************************ 00:22:23.059 START TEST nvmf_target_disconnect 00:22:23.059 ************************************ 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:22:23.059 * Looking for test storage... 
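Teardown ends by calling killprocess on the target pid; the trace above shows its guard rails before any signal is sent. Reduced to the path exercised here (a simplified paraphrase of the harness function, not the verbatim source):

    killprocess() {
        kill -0 "$1"                              # pid must still exist
        name=$(ps --no-headers -o comm= "$1")     # 'reactor_1' for an SPDK app
        if [ "$name" = sudo ]; then return 1; fi  # never signal a sudo wrapper directly
        echo "killing process with pid $1"
        kill "$1"
        wait "$1"                                 # reap it; works because the harness spawned it
    }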
00:22:23.059 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:22:23.059 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.318 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:23.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.319 --rc genhtml_branch_coverage=1 00:22:23.319 --rc genhtml_function_coverage=1 00:22:23.319 --rc genhtml_legend=1 00:22:23.319 --rc geninfo_all_blocks=1 00:22:23.319 --rc geninfo_unexecuted_blocks=1 00:22:23.319 00:22:23.319 ' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:23.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.319 --rc genhtml_branch_coverage=1 00:22:23.319 --rc genhtml_function_coverage=1 00:22:23.319 --rc genhtml_legend=1 00:22:23.319 --rc geninfo_all_blocks=1 00:22:23.319 --rc geninfo_unexecuted_blocks=1 00:22:23.319 00:22:23.319 ' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:23.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.319 --rc genhtml_branch_coverage=1 00:22:23.319 --rc genhtml_function_coverage=1 00:22:23.319 --rc genhtml_legend=1 00:22:23.319 --rc geninfo_all_blocks=1 00:22:23.319 --rc geninfo_unexecuted_blocks=1 00:22:23.319 00:22:23.319 ' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:23.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.319 --rc genhtml_branch_coverage=1 00:22:23.319 --rc genhtml_function_coverage=1 00:22:23.319 --rc genhtml_legend=1 00:22:23.319 --rc geninfo_all_blocks=1 00:22:23.319 --rc geninfo_unexecuted_blocks=1 00:22:23.319 00:22:23.319 ' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.319 04:10:17 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:28.592 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:28.592 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:28.592 04:10:22 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:28.592 Found net devices under 0000:18:00.0: mlx_0_0 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:28.592 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:28.593 Found net devices under 0000:18:00.1: mlx_0_1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
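NIC discovery above walks the known mlx5/e810/x722 PCI IDs, matches both ports of this host's Mellanox card (0x15b3:0x1015), and resolves each PCI function to its netdev by globbing sysfs. The two key lines from the trace, in standalone form (pci fixed to one port for illustration):

    pci=0000:18:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path -> mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"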
00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:28.593 04:10:22 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:28.593 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:28.593 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:22:28.593 altname enp24s0f0np0 00:22:28.593 altname ens785f0np0 00:22:28.593 inet 192.168.100.8/24 scope global mlx_0_0 00:22:28.593 valid_lft forever preferred_lft forever 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:28.593 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:28.593 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:22:28.593 altname enp24s0f1np1 00:22:28.593 altname ens785f1np1 00:22:28.593 inet 192.168.100.9/24 scope global mlx_0_1 00:22:28.593 valid_lft forever preferred_lft forever 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
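allocate_nic_ips reads the address back off each RDMA interface with the awk/cut pipeline seen twice above (and get_available_rdma_ips repeats the same walk just below to build RDMA_IP_LIST). As a standalone function, using exactly the commands from the trace:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9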
00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:28.593 192.168.100.9' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:28.593 192.168.100.9' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:28.593 192.168.100.9' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:28.593 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:28.594 ************************************ 00:22:28.594 START TEST nvmf_target_disconnect_tc1 00:22:28.594 ************************************ 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:22:28.594 04:10:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:28.852 [2024-12-10 04:10:23.049132] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:28.852 [2024-12-10 04:10:23.049160] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:28.852 [2024-12-10 04:10:23.049167] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:22:29.783 [2024-12-10 04:10:24.052976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:22:29.783 [2024-12-10 04:10:24.053034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
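The trace above derives the two target addresses by taking each RDMA netdev's first IPv4 address and stripping the prefix length. A minimal bash sketch of that derivation, assuming mlx_0_0 and mlx_0_1 are the RDMA netdevs echoed above (the helper name follows the traced function; the surrounding loop is condensed):

get_ip_address() {
    local interface=$1
    # First IPv4 address on the interface, without the /prefix suffix
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 in this run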
00:22:29.783 [2024-12-10 04:10:24.053060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:22:29.783 [2024-12-10 04:10:24.053113] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:29.783 [2024-12-10 04:10:24.053142] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:22:29.783 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:22:29.783 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:22:29.783 Initializing NVMe Controllers 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.783 00:22:29.783 real 0m1.118s 00:22:29.783 user 0m0.943s 00:22:29.783 sys 0m0.163s 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.783 ************************************ 00:22:29.783 END TEST nvmf_target_disconnect_tc1 00:22:29.783 ************************************ 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.783 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:29.783 ************************************ 00:22:29.783 START TEST nvmf_target_disconnect_tc2 00:22:29.783 ************************************ 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=877948 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 877948 00:22:29.784 04:10:24 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 877948 ']' 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.784 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.042 [2024-12-10 04:10:24.186178] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 00:22:30.042 [2024-12-10 04:10:24.186217] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.042 [2024-12-10 04:10:24.259598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.042 [2024-12-10 04:10:24.296629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.042 [2024-12-10 04:10:24.296665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.042 [2024-12-10 04:10:24.296672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.042 [2024-12-10 04:10:24.296678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.042 [2024-12-10 04:10:24.296684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
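tc1 above passes only if the reconnect example fails: nothing is listening on 192.168.100.8:4420 yet, so spdk_nvme_probe() is expected to error out and the harness inverts the exit status (es=1, then the (( !es == 0 )) check). A condensed sketch of that NOT pattern with the same arguments; the real helper in autotest_common.sh also validates the executable and tracks $es, and $rootdir here stands in for the spdk checkout:

NOT() {
    # Succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}

NOT "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'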
00:22:30.042 [2024-12-10 04:10:24.298114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:30.042 [2024-12-10 04:10:24.298221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:30.042 [2024-12-10 04:10:24.298328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:30.042 [2024-12-10 04:10:24.298329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:22:30.042 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.042 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:30.042 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.042 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.042 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.301 Malloc0 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.301 [2024-12-10 04:10:24.493144] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1348170/0x1353e50) succeed. 00:22:30.301 [2024-12-10 04:10:24.501554] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1349800/0x13954f0) succeed. 
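The target was started with -m 0xF0, and the four reactors above land on cores 4-7, which is exactly what that mask encodes. An illustrative one-liner (not part of the harness) for decoding any SPDK core mask:

mask=0xF0
for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "reactor core $core"   # prints 4, 5, 6, 7
done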
00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.301 [2024-12-10 04:10:24.630418] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=877980 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:22:30.301 04:10:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:32.837 04:10:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
877948 00:22:32.837 04:10:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Read completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.773 Write completed with error (sct=0, sc=8) 00:22:33.773 starting I/O failed 00:22:33.774 Write completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 Read completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 Write completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 Write completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 Write completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 Write completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 Write completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 Write completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 Write completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 Read completed with error (sct=0, sc=8) 00:22:33.774 starting I/O failed 00:22:33.774 [2024-12-10 04:10:27.807014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:34.342 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 877948 Killed "${NVMF_APP[@]}" "$@" 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=878767 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 878767 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 878767 ']' 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.342 04:10:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.342 [2024-12-10 04:10:28.705658] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization... 
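What just happened: target_disconnect.sh killed the first target (877948) with the reconnect example's I/O still in flight, every queued command completed in error with a CQ transport error, and disconnect_init is now starting a fresh nvmf_tgt (878767). A condensed sketch of that choreography, with $rootdir and the pid bookkeeping paraphrased rather than verbatim:

"$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
reconnectpid=$!

sleep 2
kill -9 "$nvmfpid"    # 877948 in this run; outstanding I/O fails as seen above
sleep 2

"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!            # 878767 in this run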
00:22:34.342 [2024-12-10 04:10:28.705703] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.601 [2024-12-10 04:10:28.782737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Read completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 Write completed with error (sct=0, sc=8) 00:22:34.601 starting I/O failed 00:22:34.601 [2024-12-10 04:10:28.811919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:34.601 [2024-12-10 04:10:28.821213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:34.601 [2024-12-10 04:10:28.821238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.601 [2024-12-10 04:10:28.821244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.601 [2024-12-10 04:10:28.821250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.601 [2024-12-10 04:10:28.821255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.601 [2024-12-10 04:10:28.822714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:34.601 [2024-12-10 04:10:28.822820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:34.601 [2024-12-10 04:10:28.822924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:34.602 [2024-12-10 04:10:28.822925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:22:35.169 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.169 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:35.169 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.169 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.169 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.428 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.428 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.429 Malloc0 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.429 [2024-12-10 04:10:29.626712] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17e3170/0x17eee50) succeed. 00:22:35.429 [2024-12-10 04:10:29.635124] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17e4800/0x18304f0) succeed. 
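disconnect_init now repeats the same bring-up it performed before the kill: malloc bdev, RDMA transport, subsystem, namespace, and the two listeners (the subsystem and listener calls recur just below). The same sequence as standalone RPC calls, assuming scripts/rpc.py is the client behind rpc_cmd and $rootdir is the spdk checkout:

rpc="$rootdir/scripts/rpc.py"
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420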
00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.429 [2024-12-10 04:10:29.769512] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.429 04:10:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 877980 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 
starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Read completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 Write completed with error (sct=0, sc=8) 00:22:35.689 starting I/O failed 00:22:35.689 [2024-12-10 04:10:29.816732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.689 [2024-12-10 04:10:29.827412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.689 [2024-12-10 04:10:29.827462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.689 [2024-12-10 04:10:29.827480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.689 [2024-12-10 04:10:29.827488] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.689 [2024-12-10 04:10:29.827494] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.689 [2024-12-10 04:10:29.837618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.689 qpair failed and we were unable to recover it. 
00:22:35.689 [2024-12-10 04:10:29.847481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.689 [2024-12-10 04:10:29.847525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.689 [2024-12-10 04:10:29.847541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.689 [2024-12-10 04:10:29.847548] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.689 [2024-12-10 04:10:29.847554] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.689 [2024-12-10 04:10:29.857777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.689 qpair failed and we were unable to recover it. 00:22:35.689 [2024-12-10 04:10:29.867447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.689 [2024-12-10 04:10:29.867486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.689 [2024-12-10 04:10:29.867501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.689 [2024-12-10 04:10:29.867509] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.689 [2024-12-10 04:10:29.867515] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.689 [2024-12-10 04:10:29.877781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.689 qpair failed and we were unable to recover it. 00:22:35.689 [2024-12-10 04:10:29.887508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.689 [2024-12-10 04:10:29.887549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.689 [2024-12-10 04:10:29.887564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.689 [2024-12-10 04:10:29.887570] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.689 [2024-12-10 04:10:29.887576] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.689 [2024-12-10 04:10:29.897776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.689 qpair failed and we were unable to recover it. 
00:22:35.689 [2024-12-10 04:10:29.907567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.689 [2024-12-10 04:10:29.907606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.689 [2024-12-10 04:10:29.907621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.689 [2024-12-10 04:10:29.907628] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.689 [2024-12-10 04:10:29.907633] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.689 [2024-12-10 04:10:29.917909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.689 qpair failed and we were unable to recover it. 00:22:35.689 [2024-12-10 04:10:29.927606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.689 [2024-12-10 04:10:29.927644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.689 [2024-12-10 04:10:29.927660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.689 [2024-12-10 04:10:29.927666] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.689 [2024-12-10 04:10:29.927672] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.689 [2024-12-10 04:10:29.938043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.689 qpair failed and we were unable to recover it. 00:22:35.689 [2024-12-10 04:10:29.947608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.689 [2024-12-10 04:10:29.947649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.689 [2024-12-10 04:10:29.947665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.689 [2024-12-10 04:10:29.947671] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.689 [2024-12-10 04:10:29.947677] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.689 [2024-12-10 04:10:29.957964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.689 qpair failed and we were unable to recover it. 
00:22:35.689 [2024-12-10 04:10:29.967744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.690 [2024-12-10 04:10:29.967784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.690 [2024-12-10 04:10:29.967800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.690 [2024-12-10 04:10:29.967807] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.690 [2024-12-10 04:10:29.967813] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.690 [2024-12-10 04:10:29.977936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.690 qpair failed and we were unable to recover it. 00:22:35.690 [2024-12-10 04:10:29.987825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.690 [2024-12-10 04:10:29.987860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.690 [2024-12-10 04:10:29.987875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.690 [2024-12-10 04:10:29.987882] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.690 [2024-12-10 04:10:29.987887] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.690 [2024-12-10 04:10:29.998151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.690 qpair failed and we were unable to recover it. 00:22:35.690 [2024-12-10 04:10:30.008174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.690 [2024-12-10 04:10:30.008230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.690 [2024-12-10 04:10:30.008245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.690 [2024-12-10 04:10:30.008252] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.690 [2024-12-10 04:10:30.008258] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.690 [2024-12-10 04:10:30.018200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.690 qpair failed and we were unable to recover it. 
00:22:35.690 [2024-12-10 04:10:30.028001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.690 [2024-12-10 04:10:30.028196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.690 [2024-12-10 04:10:30.028214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.690 [2024-12-10 04:10:30.028225] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.690 [2024-12-10 04:10:30.028232] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.690 [2024-12-10 04:10:30.038338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.690 qpair failed and we were unable to recover it. 00:22:35.690 [2024-12-10 04:10:30.047956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.690 [2024-12-10 04:10:30.047997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.690 [2024-12-10 04:10:30.048012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.690 [2024-12-10 04:10:30.048020] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.690 [2024-12-10 04:10:30.048025] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.690 [2024-12-10 04:10:30.058215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.690 qpair failed and we were unable to recover it. 00:22:35.690 [2024-12-10 04:10:30.068072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.690 [2024-12-10 04:10:30.068114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.690 [2024-12-10 04:10:30.068129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.690 [2024-12-10 04:10:30.068136] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.690 [2024-12-10 04:10:30.068141] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.078154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 
00:22:35.950 [2024-12-10 04:10:30.088073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.088110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.088125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.088132] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.088137] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.098439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 00:22:35.950 [2024-12-10 04:10:30.108098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.108136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.108151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.108158] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.108168] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.118498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 00:22:35.950 [2024-12-10 04:10:30.128107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.128145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.128160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.128167] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.128174] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.138526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 
00:22:35.950 [2024-12-10 04:10:30.148175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.148220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.148235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.148242] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.148247] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.158498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 00:22:35.950 [2024-12-10 04:10:30.168249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.168291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.168307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.168314] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.168319] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.178577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 00:22:35.950 [2024-12-10 04:10:30.188280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.188317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.188332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.188339] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.188345] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.198637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 
00:22:35.950 [2024-12-10 04:10:30.208419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.208457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.208473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.208480] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.208486] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.218718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 00:22:35.950 [2024-12-10 04:10:30.228414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.228455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.228470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.228477] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.228483] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.238867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 00:22:35.950 [2024-12-10 04:10:30.248585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.248622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.248638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.248645] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.248650] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.258855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 
00:22:35.950 [2024-12-10 04:10:30.268527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.268565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.268581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.268588] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.268594] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.278933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 00:22:35.950 [2024-12-10 04:10:30.288500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.950 [2024-12-10 04:10:30.288544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.950 [2024-12-10 04:10:30.288563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.950 [2024-12-10 04:10:30.288569] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.950 [2024-12-10 04:10:30.288575] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.950 [2024-12-10 04:10:30.299033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.950 qpair failed and we were unable to recover it. 00:22:35.950 [2024-12-10 04:10:30.308605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.951 [2024-12-10 04:10:30.308641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.951 [2024-12-10 04:10:30.308657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.951 [2024-12-10 04:10:30.308663] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.951 [2024-12-10 04:10:30.308669] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:35.951 [2024-12-10 04:10:30.319068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.951 qpair failed and we were unable to recover it. 
00:22:35.951 [2024-12-10 04:10:30.328610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.951 [2024-12-10 04:10:30.328650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.951 [2024-12-10 04:10:30.328665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.951 [2024-12-10 04:10:30.328672] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.951 [2024-12-10 04:10:30.328677] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.210 [2024-12-10 04:10:30.339085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.210 qpair failed and we were unable to recover it.
00:22:36.210 [2024-12-10 04:10:30.348776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.210 [2024-12-10 04:10:30.348815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.210 [2024-12-10 04:10:30.348831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.210 [2024-12-10 04:10:30.348838] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.210 [2024-12-10 04:10:30.348843] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.210 [2024-12-10 04:10:30.359116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.210 qpair failed and we were unable to recover it.
00:22:36.210 [2024-12-10 04:10:30.368871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.210 [2024-12-10 04:10:30.368908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.210 [2024-12-10 04:10:30.368923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.210 [2024-12-10 04:10:30.368933] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.210 [2024-12-10 04:10:30.368939] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.210 [2024-12-10 04:10:30.379133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.210 qpair failed and we were unable to recover it.
00:22:36.210 [2024-12-10 04:10:30.388797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.210 [2024-12-10 04:10:30.388832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.210 [2024-12-10 04:10:30.388847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.210 [2024-12-10 04:10:30.388854] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.210 [2024-12-10 04:10:30.388860] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.399282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.409000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.409046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.409062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.409069] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.409075] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.419188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.429032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.429068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.429084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.429092] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.429098] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.439198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.449088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.449136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.449152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.449159] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.449164] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.459184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.469102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.469147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.469162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.469168] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.469174] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.479497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.489152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.489186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.489201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.489208] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.489214] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.499408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.509307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.509340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.509355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.509362] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.509367] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.519653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.529322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.529358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.529373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.529380] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.529386] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.539661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.549418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.549457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.549472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.549479] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.549484] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.559974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.569412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.569446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.569462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.569469] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.569475] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.211 [2024-12-10 04:10:30.579725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.211 qpair failed and we were unable to recover it.
00:22:36.211 [2024-12-10 04:10:30.589503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.211 [2024-12-10 04:10:30.589535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.211 [2024-12-10 04:10:30.589551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.211 [2024-12-10 04:10:30.589558] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.211 [2024-12-10 04:10:30.589564] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.599730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.609626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.609665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.609680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.609687] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.609692] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.619919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.629648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.629684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.629702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.629709] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.629714] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.639865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.649613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.649652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.649667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.649674] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.649679] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.659980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.669808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.669841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.669856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.669863] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.669869] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.680112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.689733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.689769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.689784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.689791] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.689797] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.700135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.709931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.709966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.709981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.709991] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.709997] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.720225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.729954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.729988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.730003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.730009] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.730015] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.740290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.749984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.750023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.750038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.750044] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.750050] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.760257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.770012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.770052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.770066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.770072] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.770078] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.780366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.790009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.790050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.790065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.790072] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.790077] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.502 [2024-12-10 04:10:30.800400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.502 qpair failed and we were unable to recover it.
00:22:36.502 [2024-12-10 04:10:30.810027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.502 [2024-12-10 04:10:30.810063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.502 [2024-12-10 04:10:30.810078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.502 [2024-12-10 04:10:30.810085] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.502 [2024-12-10 04:10:30.810091] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.503 [2024-12-10 04:10:30.820390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.503 qpair failed and we were unable to recover it.
00:22:36.503 [2024-12-10 04:10:30.830131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.503 [2024-12-10 04:10:30.830164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.503 [2024-12-10 04:10:30.830179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.503 [2024-12-10 04:10:30.830185] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.503 [2024-12-10 04:10:30.830191] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.503 [2024-12-10 04:10:30.840392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.503 qpair failed and we were unable to recover it.
00:22:36.503 [2024-12-10 04:10:30.850248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.503 [2024-12-10 04:10:30.850292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.503 [2024-12-10 04:10:30.850307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.503 [2024-12-10 04:10:30.850314] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.503 [2024-12-10 04:10:30.850319] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.503 [2024-12-10 04:10:30.860541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.503 qpair failed and we were unable to recover it.
00:22:36.786 [2024-12-10 04:10:30.870284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.786 [2024-12-10 04:10:30.870327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.786 [2024-12-10 04:10:30.870342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.786 [2024-12-10 04:10:30.870348] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.786 [2024-12-10 04:10:30.870354] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.786 [2024-12-10 04:10:30.880578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.786 qpair failed and we were unable to recover it.
00:22:36.786 [2024-12-10 04:10:30.890235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.786 [2024-12-10 04:10:30.890275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.786 [2024-12-10 04:10:30.890290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.786 [2024-12-10 04:10:30.890297] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.786 [2024-12-10 04:10:30.890303] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.786 [2024-12-10 04:10:30.900589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.786 qpair failed and we were unable to recover it.
00:22:36.786 [2024-12-10 04:10:30.910419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.786 [2024-12-10 04:10:30.910454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.786 [2024-12-10 04:10:30.910469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.786 [2024-12-10 04:10:30.910476] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.786 [2024-12-10 04:10:30.910482] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.786 [2024-12-10 04:10:30.920829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.786 qpair failed and we were unable to recover it.
00:22:36.786 [2024-12-10 04:10:30.930525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.786 [2024-12-10 04:10:30.930566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.786 [2024-12-10 04:10:30.930581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.786 [2024-12-10 04:10:30.930588] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.786 [2024-12-10 04:10:30.930593] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.786 [2024-12-10 04:10:30.940903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.786 qpair failed and we were unable to recover it.
00:22:36.786 [2024-12-10 04:10:30.950489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.786 [2024-12-10 04:10:30.950528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.786 [2024-12-10 04:10:30.950543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.786 [2024-12-10 04:10:30.950550] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.786 [2024-12-10 04:10:30.950555] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.786 [2024-12-10 04:10:30.960812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.786 qpair failed and we were unable to recover it.
00:22:36.786 [2024-12-10 04:10:30.970589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.786 [2024-12-10 04:10:30.970627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.786 [2024-12-10 04:10:30.970648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:30.970655] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:30.970660] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:30.981006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:36.787 [2024-12-10 04:10:30.990612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.787 [2024-12-10 04:10:30.990649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.787 [2024-12-10 04:10:30.990664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:30.990670] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:30.990676] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:31.001002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:36.787 [2024-12-10 04:10:31.010685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.787 [2024-12-10 04:10:31.010721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.787 [2024-12-10 04:10:31.010736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:31.010742] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:31.010748] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:31.021016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:36.787 [2024-12-10 04:10:31.030762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.787 [2024-12-10 04:10:31.030799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.787 [2024-12-10 04:10:31.030814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:31.030821] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:31.030827] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:31.041224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:36.787 [2024-12-10 04:10:31.050866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.787 [2024-12-10 04:10:31.050905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.787 [2024-12-10 04:10:31.050920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:31.050927] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:31.050936] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:31.061150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:36.787 [2024-12-10 04:10:31.070910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.787 [2024-12-10 04:10:31.070942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.787 [2024-12-10 04:10:31.070956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:31.070963] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:31.070969] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:31.081290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:36.787 [2024-12-10 04:10:31.090941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.787 [2024-12-10 04:10:31.090979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.787 [2024-12-10 04:10:31.090993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:31.091000] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:31.091006] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:31.101108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:36.787 [2024-12-10 04:10:31.111013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.787 [2024-12-10 04:10:31.111052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.787 [2024-12-10 04:10:31.111066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:31.111073] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:31.111078] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:31.121366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:36.787 [2024-12-10 04:10:31.131278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.787 [2024-12-10 04:10:31.131316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.787 [2024-12-10 04:10:31.131331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:31.131338] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:31.131343] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:31.141350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:36.787 [2024-12-10 04:10:31.151160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.787 [2024-12-10 04:10:31.151197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.787 [2024-12-10 04:10:31.151212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.787 [2024-12-10 04:10:31.151219] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.787 [2024-12-10 04:10:31.151225] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:36.787 [2024-12-10 04:10:31.161465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:36.787 qpair failed and we were unable to recover it.
00:22:37.114 [2024-12-10 04:10:31.171132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.114 [2024-12-10 04:10:31.171171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.114 [2024-12-10 04:10:31.171186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.114 [2024-12-10 04:10:31.171192] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.114 [2024-12-10 04:10:31.171198] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.114 [2024-12-10 04:10:31.181469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.114 qpair failed and we were unable to recover it.
00:22:37.114 [2024-12-10 04:10:31.191285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.114 [2024-12-10 04:10:31.191326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.114 [2024-12-10 04:10:31.191342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.114 [2024-12-10 04:10:31.191348] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.114 [2024-12-10 04:10:31.191354] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.114 [2024-12-10 04:10:31.201842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.114 qpair failed and we were unable to recover it.
00:22:37.114 [2024-12-10 04:10:31.211280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.114 [2024-12-10 04:10:31.211314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.114 [2024-12-10 04:10:31.211328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.114 [2024-12-10 04:10:31.211335] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.114 [2024-12-10 04:10:31.211340] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.114 [2024-12-10 04:10:31.221669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.114 qpair failed and we were unable to recover it.
00:22:37.114 [2024-12-10 04:10:31.231475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.114 [2024-12-10 04:10:31.231522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.114 [2024-12-10 04:10:31.231536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.114 [2024-12-10 04:10:31.231542] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.114 [2024-12-10 04:10:31.231548] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.114 [2024-12-10 04:10:31.241628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.114 qpair failed and we were unable to recover it.
00:22:37.114 [2024-12-10 04:10:31.251416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.114 [2024-12-10 04:10:31.251454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.114 [2024-12-10 04:10:31.251469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.114 [2024-12-10 04:10:31.251476] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.114 [2024-12-10 04:10:31.251481] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.114 [2024-12-10 04:10:31.261811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.114 qpair failed and we were unable to recover it.
00:22:37.114 [2024-12-10 04:10:31.271528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.114 [2024-12-10 04:10:31.271565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.114 [2024-12-10 04:10:31.271580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.114 [2024-12-10 04:10:31.271586] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.114 [2024-12-10 04:10:31.271592] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.114 [2024-12-10 04:10:31.281837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.114 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.291648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.115 [2024-12-10 04:10:31.291688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.115 [2024-12-10 04:10:31.291702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.115 [2024-12-10 04:10:31.291709] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.115 [2024-12-10 04:10:31.291715] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.115 [2024-12-10 04:10:31.301794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.115 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.311604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.115 [2024-12-10 04:10:31.311645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.115 [2024-12-10 04:10:31.311663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.115 [2024-12-10 04:10:31.311669] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.115 [2024-12-10 04:10:31.311675] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.115 [2024-12-10 04:10:31.321821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.115 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.331817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.115 [2024-12-10 04:10:31.331855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.115 [2024-12-10 04:10:31.331870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.115 [2024-12-10 04:10:31.331876] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.115 [2024-12-10 04:10:31.331882] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.115 [2024-12-10 04:10:31.341915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.115 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.351651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.115 [2024-12-10 04:10:31.351691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.115 [2024-12-10 04:10:31.351706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.115 [2024-12-10 04:10:31.351713] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.115 [2024-12-10 04:10:31.351719] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.115 [2024-12-10 04:10:31.361896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.115 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.371786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.115 [2024-12-10 04:10:31.371823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.115 [2024-12-10 04:10:31.371838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.115 [2024-12-10 04:10:31.371845] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.115 [2024-12-10 04:10:31.371850] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.115 [2024-12-10 04:10:31.382190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.115 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.391857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.115 [2024-12-10 04:10:31.391895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.115 [2024-12-10 04:10:31.391910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.115 [2024-12-10 04:10:31.391917] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.115 [2024-12-10 04:10:31.391926] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.115 [2024-12-10 04:10:31.402171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.115 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.411817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.115 [2024-12-10 04:10:31.411854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.115 [2024-12-10 04:10:31.411869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.115 [2024-12-10 04:10:31.411875] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.115 [2024-12-10 04:10:31.411881] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.115 [2024-12-10 04:10:31.422224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.115 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.431923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.115 [2024-12-10 04:10:31.431957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.115 [2024-12-10 04:10:31.431972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.115 [2024-12-10 04:10:31.431979] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.115 [2024-12-10 04:10:31.431985] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.115 [2024-12-10 04:10:31.442241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.115 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.451986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.115 [2024-12-10 04:10:31.452023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.115 [2024-12-10 04:10:31.452038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.115 [2024-12-10 04:10:31.452045] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.115 [2024-12-10 04:10:31.452050] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:37.115 [2024-12-10 04:10:31.462206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.115 qpair failed and we were unable to recover it.
00:22:37.115 [2024-12-10 04:10:31.472002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.115 [2024-12-10 04:10:31.472034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.115 [2024-12-10 04:10:31.472049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.115 [2024-12-10 04:10:31.472055] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.115 [2024-12-10 04:10:31.472061] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.115 [2024-12-10 04:10:31.482327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.115 qpair failed and we were unable to recover it. 00:22:37.405 [2024-12-10 04:10:31.492050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.405 [2024-12-10 04:10:31.492087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.405 [2024-12-10 04:10:31.492102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.405 [2024-12-10 04:10:31.492109] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.405 [2024-12-10 04:10:31.492115] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.405 [2024-12-10 04:10:31.502378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.405 qpair failed and we were unable to recover it. 00:22:37.405 [2024-12-10 04:10:31.512071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.405 [2024-12-10 04:10:31.512108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.405 [2024-12-10 04:10:31.512122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.405 [2024-12-10 04:10:31.512129] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.405 [2024-12-10 04:10:31.512134] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.405 [2024-12-10 04:10:31.522356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.405 qpair failed and we were unable to recover it. 
00:22:37.405 [2024-12-10 04:10:31.532139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.405 [2024-12-10 04:10:31.532178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.405 [2024-12-10 04:10:31.532193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.405 [2024-12-10 04:10:31.532199] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.405 [2024-12-10 04:10:31.532205] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.405 [2024-12-10 04:10:31.542521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.405 qpair failed and we were unable to recover it. 00:22:37.405 [2024-12-10 04:10:31.552231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.405 [2024-12-10 04:10:31.552269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.405 [2024-12-10 04:10:31.552285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.405 [2024-12-10 04:10:31.552292] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.405 [2024-12-10 04:10:31.552297] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.405 [2024-12-10 04:10:31.562458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.405 qpair failed and we were unable to recover it. 00:22:37.405 [2024-12-10 04:10:31.572297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.405 [2024-12-10 04:10:31.572335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.405 [2024-12-10 04:10:31.572350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.405 [2024-12-10 04:10:31.572357] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.405 [2024-12-10 04:10:31.572363] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.405 [2024-12-10 04:10:31.582729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.405 qpair failed and we were unable to recover it. 
00:22:37.405 [2024-12-10 04:10:31.592323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.405 [2024-12-10 04:10:31.592356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.405 [2024-12-10 04:10:31.592371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.405 [2024-12-10 04:10:31.592378] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.405 [2024-12-10 04:10:31.592384] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.405 [2024-12-10 04:10:31.602807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.405 qpair failed and we were unable to recover it. 00:22:37.405 [2024-12-10 04:10:31.612405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.405 [2024-12-10 04:10:31.612447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.405 [2024-12-10 04:10:31.612462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.405 [2024-12-10 04:10:31.612468] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.405 [2024-12-10 04:10:31.612474] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.405 [2024-12-10 04:10:31.622650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.405 qpair failed and we were unable to recover it. 00:22:37.405 [2024-12-10 04:10:31.632466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.405 [2024-12-10 04:10:31.632503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.405 [2024-12-10 04:10:31.632519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.405 [2024-12-10 04:10:31.632525] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.405 [2024-12-10 04:10:31.632531] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.405 [2024-12-10 04:10:31.642776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.405 qpair failed and we were unable to recover it. 
00:22:37.405 [2024-12-10 04:10:31.652517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.406 [2024-12-10 04:10:31.652556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.406 [2024-12-10 04:10:31.652574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.406 [2024-12-10 04:10:31.652582] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.406 [2024-12-10 04:10:31.652588] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.406 [2024-12-10 04:10:31.662816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.406 qpair failed and we were unable to recover it. 00:22:37.406 [2024-12-10 04:10:31.672625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.406 [2024-12-10 04:10:31.672668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.406 [2024-12-10 04:10:31.672683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.406 [2024-12-10 04:10:31.672689] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.406 [2024-12-10 04:10:31.672695] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.406 [2024-12-10 04:10:31.682791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.406 qpair failed and we were unable to recover it. 00:22:37.406 [2024-12-10 04:10:31.692615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.406 [2024-12-10 04:10:31.692651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.406 [2024-12-10 04:10:31.692666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.406 [2024-12-10 04:10:31.692673] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.406 [2024-12-10 04:10:31.692679] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.406 [2024-12-10 04:10:31.703017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.406 qpair failed and we were unable to recover it. 
00:22:37.406 [2024-12-10 04:10:31.712732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.406 [2024-12-10 04:10:31.712771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.406 [2024-12-10 04:10:31.712785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.406 [2024-12-10 04:10:31.712792] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.406 [2024-12-10 04:10:31.712797] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.406 [2024-12-10 04:10:31.723191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.406 qpair failed and we were unable to recover it. 00:22:37.406 [2024-12-10 04:10:31.732822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.406 [2024-12-10 04:10:31.732858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.406 [2024-12-10 04:10:31.732872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.406 [2024-12-10 04:10:31.732878] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.406 [2024-12-10 04:10:31.732888] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.406 [2024-12-10 04:10:31.743041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.406 qpair failed and we were unable to recover it. 00:22:37.406 [2024-12-10 04:10:31.752875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.406 [2024-12-10 04:10:31.752914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.406 [2024-12-10 04:10:31.752929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.406 [2024-12-10 04:10:31.752935] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.406 [2024-12-10 04:10:31.752941] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.406 [2024-12-10 04:10:31.763221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.406 qpair failed and we were unable to recover it. 
00:22:37.406 [2024-12-10 04:10:31.772837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.406 [2024-12-10 04:10:31.772870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.406 [2024-12-10 04:10:31.772885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.406 [2024-12-10 04:10:31.772891] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.406 [2024-12-10 04:10:31.772897] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.406 [2024-12-10 04:10:31.783256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.406 qpair failed and we were unable to recover it. 00:22:37.666 [2024-12-10 04:10:31.792977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.666 [2024-12-10 04:10:31.793017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.666 [2024-12-10 04:10:31.793031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.666 [2024-12-10 04:10:31.793037] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.666 [2024-12-10 04:10:31.793043] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.666 [2024-12-10 04:10:31.803212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.666 qpair failed and we were unable to recover it. 00:22:37.666 [2024-12-10 04:10:31.813014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.666 [2024-12-10 04:10:31.813050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.666 [2024-12-10 04:10:31.813064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.666 [2024-12-10 04:10:31.813071] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.666 [2024-12-10 04:10:31.813076] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.666 [2024-12-10 04:10:31.823328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.666 qpair failed and we were unable to recover it. 
00:22:37.666 [2024-12-10 04:10:31.833110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.666 [2024-12-10 04:10:31.833145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.666 [2024-12-10 04:10:31.833159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.666 [2024-12-10 04:10:31.833166] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.666 [2024-12-10 04:10:31.833171] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.666 [2024-12-10 04:10:31.843779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.666 qpair failed and we were unable to recover it. 00:22:37.666 [2024-12-10 04:10:31.853140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.666 [2024-12-10 04:10:31.853178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.666 [2024-12-10 04:10:31.853193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.666 [2024-12-10 04:10:31.853200] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.666 [2024-12-10 04:10:31.853205] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.666 [2024-12-10 04:10:31.863416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.666 qpair failed and we were unable to recover it. 00:22:37.666 [2024-12-10 04:10:31.873137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.666 [2024-12-10 04:10:31.873171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.666 [2024-12-10 04:10:31.873186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.666 [2024-12-10 04:10:31.873192] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.666 [2024-12-10 04:10:31.873198] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.666 [2024-12-10 04:10:31.883505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.666 qpair failed and we were unable to recover it. 
00:22:37.666 [2024-12-10 04:10:31.893296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.666 [2024-12-10 04:10:31.893333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.666 [2024-12-10 04:10:31.893348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.666 [2024-12-10 04:10:31.893354] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.666 [2024-12-10 04:10:31.893360] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.666 [2024-12-10 04:10:31.903675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.666 qpair failed and we were unable to recover it. 00:22:37.666 [2024-12-10 04:10:31.913279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.666 [2024-12-10 04:10:31.913319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.666 [2024-12-10 04:10:31.913334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.666 [2024-12-10 04:10:31.913340] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.666 [2024-12-10 04:10:31.913346] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.667 [2024-12-10 04:10:31.923761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.667 qpair failed and we were unable to recover it. 00:22:37.667 [2024-12-10 04:10:31.933447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.667 [2024-12-10 04:10:31.933489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.667 [2024-12-10 04:10:31.933505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.667 [2024-12-10 04:10:31.933513] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.667 [2024-12-10 04:10:31.933520] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.667 [2024-12-10 04:10:31.943857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.667 qpair failed and we were unable to recover it. 
00:22:37.667 [2024-12-10 04:10:31.953385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.667 [2024-12-10 04:10:31.953417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.667 [2024-12-10 04:10:31.953432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.667 [2024-12-10 04:10:31.953440] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.667 [2024-12-10 04:10:31.953447] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.667 [2024-12-10 04:10:31.963821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.667 qpair failed and we were unable to recover it. 00:22:37.667 [2024-12-10 04:10:31.973410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.667 [2024-12-10 04:10:31.973447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.667 [2024-12-10 04:10:31.973462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.667 [2024-12-10 04:10:31.973468] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.667 [2024-12-10 04:10:31.973474] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.667 [2024-12-10 04:10:31.983767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.667 qpair failed and we were unable to recover it. 00:22:37.667 [2024-12-10 04:10:31.993539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.667 [2024-12-10 04:10:31.993576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.667 [2024-12-10 04:10:31.993591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.667 [2024-12-10 04:10:31.993600] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.667 [2024-12-10 04:10:31.993606] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.667 [2024-12-10 04:10:32.003989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.667 qpair failed and we were unable to recover it. 
00:22:37.667 [2024-12-10 04:10:32.013560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.667 [2024-12-10 04:10:32.013592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.667 [2024-12-10 04:10:32.013607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.667 [2024-12-10 04:10:32.013614] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.667 [2024-12-10 04:10:32.013620] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.667 [2024-12-10 04:10:32.023921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.667 qpair failed and we were unable to recover it. 00:22:37.667 [2024-12-10 04:10:32.033640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.667 [2024-12-10 04:10:32.033674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.667 [2024-12-10 04:10:32.033689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.667 [2024-12-10 04:10:32.033695] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.667 [2024-12-10 04:10:32.033700] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.667 [2024-12-10 04:10:32.044101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.667 qpair failed and we were unable to recover it. 00:22:37.927 [2024-12-10 04:10:32.053694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.927 [2024-12-10 04:10:32.053729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.927 [2024-12-10 04:10:32.053744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.927 [2024-12-10 04:10:32.053751] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.927 [2024-12-10 04:10:32.053756] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.927 [2024-12-10 04:10:32.064053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.927 qpair failed and we were unable to recover it. 
00:22:37.927 [2024-12-10 04:10:32.073802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.927 [2024-12-10 04:10:32.073837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.927 [2024-12-10 04:10:32.073853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.927 [2024-12-10 04:10:32.073859] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.927 [2024-12-10 04:10:32.073865] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.927 [2024-12-10 04:10:32.084185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.927 qpair failed and we were unable to recover it. 00:22:37.927 [2024-12-10 04:10:32.093851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.927 [2024-12-10 04:10:32.093888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.927 [2024-12-10 04:10:32.093903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.927 [2024-12-10 04:10:32.093910] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.927 [2024-12-10 04:10:32.093915] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.927 [2024-12-10 04:10:32.104061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.927 qpair failed and we were unable to recover it. 00:22:37.927 [2024-12-10 04:10:32.113940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.927 [2024-12-10 04:10:32.113978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.927 [2024-12-10 04:10:32.113992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.927 [2024-12-10 04:10:32.113999] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.927 [2024-12-10 04:10:32.114005] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.927 [2024-12-10 04:10:32.124339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.927 qpair failed and we were unable to recover it. 
00:22:37.927 [2024-12-10 04:10:32.133875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.927 [2024-12-10 04:10:32.133913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.927 [2024-12-10 04:10:32.133928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.927 [2024-12-10 04:10:32.133934] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.927 [2024-12-10 04:10:32.133940] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.927 [2024-12-10 04:10:32.144088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.927 qpair failed and we were unable to recover it. 00:22:37.927 [2024-12-10 04:10:32.153993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.927 [2024-12-10 04:10:32.154030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.927 [2024-12-10 04:10:32.154045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.927 [2024-12-10 04:10:32.154051] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.927 [2024-12-10 04:10:32.154057] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.927 [2024-12-10 04:10:32.164356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.927 qpair failed and we were unable to recover it. 00:22:37.927 [2024-12-10 04:10:32.174104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.927 [2024-12-10 04:10:32.174142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.928 [2024-12-10 04:10:32.174157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.928 [2024-12-10 04:10:32.174164] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.928 [2024-12-10 04:10:32.174169] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.928 [2024-12-10 04:10:32.184362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.928 qpair failed and we were unable to recover it. 
00:22:37.928 [2024-12-10 04:10:32.194097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.928 [2024-12-10 04:10:32.194128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.928 [2024-12-10 04:10:32.194143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.928 [2024-12-10 04:10:32.194149] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.928 [2024-12-10 04:10:32.194155] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.928 [2024-12-10 04:10:32.204438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.928 qpair failed and we were unable to recover it. 00:22:37.928 [2024-12-10 04:10:32.214119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.928 [2024-12-10 04:10:32.214156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.928 [2024-12-10 04:10:32.214171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.928 [2024-12-10 04:10:32.214178] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.928 [2024-12-10 04:10:32.214184] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.928 [2024-12-10 04:10:32.224570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.928 qpair failed and we were unable to recover it. 00:22:37.928 [2024-12-10 04:10:32.234271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.928 [2024-12-10 04:10:32.234306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.928 [2024-12-10 04:10:32.234321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.928 [2024-12-10 04:10:32.234328] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.928 [2024-12-10 04:10:32.234334] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.928 [2024-12-10 04:10:32.244451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.928 qpair failed and we were unable to recover it. 
00:22:37.928 [2024-12-10 04:10:32.254316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.928 [2024-12-10 04:10:32.254355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.928 [2024-12-10 04:10:32.254372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.928 [2024-12-10 04:10:32.254379] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.928 [2024-12-10 04:10:32.254384] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.928 [2024-12-10 04:10:32.264606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.928 qpair failed and we were unable to recover it. 00:22:37.928 [2024-12-10 04:10:32.274285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.928 [2024-12-10 04:10:32.274322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.928 [2024-12-10 04:10:32.274336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.928 [2024-12-10 04:10:32.274343] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.928 [2024-12-10 04:10:32.274349] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.928 [2024-12-10 04:10:32.284697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.928 qpair failed and we were unable to recover it. 00:22:37.928 [2024-12-10 04:10:32.294440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.928 [2024-12-10 04:10:32.294477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.928 [2024-12-10 04:10:32.294492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.928 [2024-12-10 04:10:32.294498] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.928 [2024-12-10 04:10:32.294503] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:37.928 [2024-12-10 04:10:32.304626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.928 qpair failed and we were unable to recover it. 
00:22:38.188 [2024-12-10 04:10:32.314406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.314446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.314461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.314467] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.314472] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.324779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 00:22:38.188 [2024-12-10 04:10:32.334453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.334486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.334501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.334510] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.334515] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.344827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 00:22:38.188 [2024-12-10 04:10:32.354501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.354536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.354552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.354558] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.354564] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.364929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 
00:22:38.188 [2024-12-10 04:10:32.374633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.374669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.374685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.374691] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.374697] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.384860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 00:22:38.188 [2024-12-10 04:10:32.394765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.394804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.394819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.394825] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.394831] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.405020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 00:22:38.188 [2024-12-10 04:10:32.414828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.414866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.414880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.414887] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.414893] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.424960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 
00:22:38.188 [2024-12-10 04:10:32.434898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.434938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.434953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.434960] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.434965] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.445134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 00:22:38.188 [2024-12-10 04:10:32.454863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.454902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.454917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.454923] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.454929] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.465155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 00:22:38.188 [2024-12-10 04:10:32.474959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.475000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.475014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.475021] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.475027] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.485722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 
00:22:38.188 [2024-12-10 04:10:32.494929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.494964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.494978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.494985] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.494991] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.505244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 00:22:38.188 [2024-12-10 04:10:32.515055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.515095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.515110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.515117] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.515122] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.188 [2024-12-10 04:10:32.525368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.188 qpair failed and we were unable to recover it. 00:22:38.188 [2024-12-10 04:10:32.535094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.188 [2024-12-10 04:10:32.535135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.188 [2024-12-10 04:10:32.535150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.188 [2024-12-10 04:10:32.535157] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.188 [2024-12-10 04:10:32.535162] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.189 [2024-12-10 04:10:32.545322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.189 qpair failed and we were unable to recover it. 
00:22:38.189 [2024-12-10 04:10:32.555147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.189 [2024-12-10 04:10:32.555185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.189 [2024-12-10 04:10:32.555200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.189 [2024-12-10 04:10:32.555207] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.189 [2024-12-10 04:10:32.555213] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.189 [2024-12-10 04:10:32.565472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.189 qpair failed and we were unable to recover it. 00:22:38.449 [2024-12-10 04:10:32.575102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.449 [2024-12-10 04:10:32.575137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.449 [2024-12-10 04:10:32.575152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.449 [2024-12-10 04:10:32.575158] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.449 [2024-12-10 04:10:32.575164] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.449 [2024-12-10 04:10:32.585516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.449 qpair failed and we were unable to recover it. 00:22:38.449 [2024-12-10 04:10:32.595199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.449 [2024-12-10 04:10:32.595236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.449 [2024-12-10 04:10:32.595253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.449 [2024-12-10 04:10:32.595259] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.449 [2024-12-10 04:10:32.595265] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:38.449 [2024-12-10 04:10:32.605479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.449 qpair failed and we were unable to recover it. 
00:22:38.449 [2024-12-10 04:10:32.615315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.449 [2024-12-10 04:10:32.615355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.449 [2024-12-10 04:10:32.615370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.449 [2024-12-10 04:10:32.615377] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.449 [2024-12-10 04:10:32.615382] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.449 [2024-12-10 04:10:32.625657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.449 qpair failed and we were unable to recover it.
00:22:38.449 [2024-12-10 04:10:32.635450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.449 [2024-12-10 04:10:32.635491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.449 [2024-12-10 04:10:32.635506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.449 [2024-12-10 04:10:32.635513] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.449 [2024-12-10 04:10:32.635519] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.449 [2024-12-10 04:10:32.645629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.449 qpair failed and we were unable to recover it.
00:22:38.449 [2024-12-10 04:10:32.655454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.449 [2024-12-10 04:10:32.655490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.449 [2024-12-10 04:10:32.655505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.449 [2024-12-10 04:10:32.655511] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.449 [2024-12-10 04:10:32.655517] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.449 [2024-12-10 04:10:32.665759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.449 qpair failed and we were unable to recover it.
00:22:38.449 [2024-12-10 04:10:32.675367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.449 [2024-12-10 04:10:32.675405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.449 [2024-12-10 04:10:32.675421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.449 [2024-12-10 04:10:32.675431] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.449 [2024-12-10 04:10:32.675437] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.449 [2024-12-10 04:10:32.685733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.449 qpair failed and we were unable to recover it.
00:22:38.449 [2024-12-10 04:10:32.695596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.449 [2024-12-10 04:10:32.695632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.449 [2024-12-10 04:10:32.695647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.449 [2024-12-10 04:10:32.695654] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.449 [2024-12-10 04:10:32.695659] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.449 [2024-12-10 04:10:32.705879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.449 qpair failed and we were unable to recover it.
00:22:38.449 [2024-12-10 04:10:32.715529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.449 [2024-12-10 04:10:32.715567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.449 [2024-12-10 04:10:32.715583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.449 [2024-12-10 04:10:32.715589] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.449 [2024-12-10 04:10:32.715595] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.449 [2024-12-10 04:10:32.725925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.449 qpair failed and we were unable to recover it.
00:22:38.449 [2024-12-10 04:10:32.735727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.449 [2024-12-10 04:10:32.735764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.449 [2024-12-10 04:10:32.735780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.449 [2024-12-10 04:10:32.735786] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.449 [2024-12-10 04:10:32.735792] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.449 [2024-12-10 04:10:32.746047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.449 qpair failed and we were unable to recover it.
00:22:38.449 [2024-12-10 04:10:32.755752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.450 [2024-12-10 04:10:32.755790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.450 [2024-12-10 04:10:32.755805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.450 [2024-12-10 04:10:32.755812] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.450 [2024-12-10 04:10:32.755818] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.450 [2024-12-10 04:10:32.766203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.450 qpair failed and we were unable to recover it.
00:22:38.450 [2024-12-10 04:10:32.775729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.450 [2024-12-10 04:10:32.775765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.450 [2024-12-10 04:10:32.775779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.450 [2024-12-10 04:10:32.775786] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.450 [2024-12-10 04:10:32.775792] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.450 [2024-12-10 04:10:32.786227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.450 qpair failed and we were unable to recover it.
00:22:38.450 [2024-12-10 04:10:32.795821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.450 [2024-12-10 04:10:32.795862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.450 [2024-12-10 04:10:32.795877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.450 [2024-12-10 04:10:32.795884] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.450 [2024-12-10 04:10:32.795890] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.450 [2024-12-10 04:10:32.806145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.450 qpair failed and we were unable to recover it.
00:22:38.450 [2024-12-10 04:10:32.816038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.450 [2024-12-10 04:10:32.816074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.450 [2024-12-10 04:10:32.816089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.450 [2024-12-10 04:10:32.816095] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.450 [2024-12-10 04:10:32.816101] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.450 [2024-12-10 04:10:32.826285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.450 qpair failed and we were unable to recover it.
00:22:38.709 [2024-12-10 04:10:32.835947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.709 [2024-12-10 04:10:32.835986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.709 [2024-12-10 04:10:32.836001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.709 [2024-12-10 04:10:32.836007] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.709 [2024-12-10 04:10:32.836013] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.709 [2024-12-10 04:10:32.846315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.709 qpair failed and we were unable to recover it.
00:22:38.709 [2024-12-10 04:10:32.855907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.709 [2024-12-10 04:10:32.855945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.709 [2024-12-10 04:10:32.855960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.709 [2024-12-10 04:10:32.855966] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.709 [2024-12-10 04:10:32.855972] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.709 [2024-12-10 04:10:32.866339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.709 qpair failed and we were unable to recover it.
00:22:38.709 [2024-12-10 04:10:32.876149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.709 [2024-12-10 04:10:32.876186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.709 [2024-12-10 04:10:32.876201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.709 [2024-12-10 04:10:32.876207] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.709 [2024-12-10 04:10:32.876213] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.709 [2024-12-10 04:10:32.886490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.709 qpair failed and we were unable to recover it.
00:22:38.709 [2024-12-10 04:10:32.896193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:32.896224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:32.896238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:32.896245] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:32.896251] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:32.906488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.710 [2024-12-10 04:10:32.916236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:32.916271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:32.916287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:32.916293] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:32.916299] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:32.926611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.710 [2024-12-10 04:10:32.936345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:32.936383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:32.936401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:32.936407] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:32.936413] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:32.946583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.710 [2024-12-10 04:10:32.956434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:32.956471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:32.956486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:32.956492] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:32.956498] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:32.966731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.710 [2024-12-10 04:10:32.976397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:32.976434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:32.976451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:32.976458] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:32.976463] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:32.986574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.710 [2024-12-10 04:10:32.996444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:32.996479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:32.996494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:32.996500] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:32.996506] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:33.006715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.710 [2024-12-10 04:10:33.016617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:33.016655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:33.016669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:33.016675] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:33.016684] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:33.026722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.710 [2024-12-10 04:10:33.036685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:33.036724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:33.036738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:33.036745] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:33.036750] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:33.046951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.710 [2024-12-10 04:10:33.056669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:33.056709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:33.056724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:33.056730] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:33.056736] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:33.066945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.710 [2024-12-10 04:10:33.076739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.710 [2024-12-10 04:10:33.076776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.710 [2024-12-10 04:10:33.076790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.710 [2024-12-10 04:10:33.076797] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.710 [2024-12-10 04:10:33.076803] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.710 [2024-12-10 04:10:33.087050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.710 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.096795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.096832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.096847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.096853] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.096859] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.107158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.116835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.116870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.116885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.116891] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.116897] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.127307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.136893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.136930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.136945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.136951] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.136956] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.147081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.156902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.156939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.156953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.156960] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.156965] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.167300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.177008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.177044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.177058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.177065] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.177070] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.187392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.197003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.197048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.197063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.197069] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.197075] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.207501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.217164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.217202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.217216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.217223] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.217228] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.227310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.237219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.237256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.237280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.237286] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.237292] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.247445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.257107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.257145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.257160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.257166] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.257171] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.267472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.277149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.277188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.277206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.277213] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.970 [2024-12-10 04:10:33.277218] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.970 [2024-12-10 04:10:33.287541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.970 qpair failed and we were unable to recover it.
00:22:38.970 [2024-12-10 04:10:33.297366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.970 [2024-12-10 04:10:33.297405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.970 [2024-12-10 04:10:33.297421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.970 [2024-12-10 04:10:33.297427] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.971 [2024-12-10 04:10:33.297433] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.971 [2024-12-10 04:10:33.307703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.971 qpair failed and we were unable to recover it.
00:22:38.971 [2024-12-10 04:10:33.317384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.971 [2024-12-10 04:10:33.317425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.971 [2024-12-10 04:10:33.317439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.971 [2024-12-10 04:10:33.317446] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.971 [2024-12-10 04:10:33.317451] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.971 [2024-12-10 04:10:33.327544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.971 qpair failed and we were unable to recover it.
00:22:38.971 [2024-12-10 04:10:33.337398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:38.971 [2024-12-10 04:10:33.337435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:38.971 [2024-12-10 04:10:33.337450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:38.971 [2024-12-10 04:10:33.337456] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:38.971 [2024-12-10 04:10:33.337461] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:38.971 [2024-12-10 04:10:33.347785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:38.971 qpair failed and we were unable to recover it.
00:22:39.230 [2024-12-10 04:10:33.357517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.230 [2024-12-10 04:10:33.357558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.230 [2024-12-10 04:10:33.357572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.230 [2024-12-10 04:10:33.357579] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.230 [2024-12-10 04:10:33.357587] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.230 [2024-12-10 04:10:33.367766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.230 qpair failed and we were unable to recover it.
00:22:39.230 [2024-12-10 04:10:33.377556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.230 [2024-12-10 04:10:33.377594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.230 [2024-12-10 04:10:33.377608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.230 [2024-12-10 04:10:33.377614] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.230 [2024-12-10 04:10:33.377620] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.230 [2024-12-10 04:10:33.387780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.230 qpair failed and we were unable to recover it.
00:22:39.230 [2024-12-10 04:10:33.397475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.230 [2024-12-10 04:10:33.397511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.230 [2024-12-10 04:10:33.397526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.230 [2024-12-10 04:10:33.397533] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.230 [2024-12-10 04:10:33.397538] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.230 [2024-12-10 04:10:33.407927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.230 qpair failed and we were unable to recover it.
00:22:39.230 [2024-12-10 04:10:33.417603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.417643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.417657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.417663] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.417669] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.427977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.231 [2024-12-10 04:10:33.437686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.437727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.437742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.437749] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.437755] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.447774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.231 [2024-12-10 04:10:33.457767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.457803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.457817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.457823] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.457830] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.467997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.231 [2024-12-10 04:10:33.477904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.477941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.477955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.477962] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.477968] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.488097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.231 [2024-12-10 04:10:33.497865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.497902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.497916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.497923] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.497929] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.508008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.231 [2024-12-10 04:10:33.517938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.517974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.517989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.517996] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.518001] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.528214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.231 [2024-12-10 04:10:33.537986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.538025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.538040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.538046] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.538052] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.548261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.231 [2024-12-10 04:10:33.558093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.558123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.558137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.558144] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.558150] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.568399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.231 [2024-12-10 04:10:33.578079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.578115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.578129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.578136] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.578141] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.588379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.231 [2024-12-10 04:10:33.598167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.231 [2024-12-10 04:10:33.598202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.231 [2024-12-10 04:10:33.598217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.231 [2024-12-10 04:10:33.598223] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.231 [2024-12-10 04:10:33.598229] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.231 [2024-12-10 04:10:33.608448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.231 qpair failed and we were unable to recover it.
00:22:39.491 [2024-12-10 04:10:33.618216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.491 [2024-12-10 04:10:33.618254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.491 [2024-12-10 04:10:33.618272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.491 [2024-12-10 04:10:33.618283] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.491 [2024-12-10 04:10:33.618288] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.491 [2024-12-10 04:10:33.628311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.491 qpair failed and we were unable to recover it.
00:22:39.491 [2024-12-10 04:10:33.638274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.491 [2024-12-10 04:10:33.638308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.491 [2024-12-10 04:10:33.638323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.491 [2024-12-10 04:10:33.638330] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.491 [2024-12-10 04:10:33.638336] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.491 [2024-12-10 04:10:33.648474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.491 qpair failed and we were unable to recover it.
00:22:39.491 [2024-12-10 04:10:33.658287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.491 [2024-12-10 04:10:33.658323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.491 [2024-12-10 04:10:33.658338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.491 [2024-12-10 04:10:33.658344] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.491 [2024-12-10 04:10:33.658350] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.491 [2024-12-10 04:10:33.668741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.491 qpair failed and we were unable to recover it.
00:22:39.491 [2024-12-10 04:10:33.678428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.491 [2024-12-10 04:10:33.678464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.491 [2024-12-10 04:10:33.678478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.491 [2024-12-10 04:10:33.678484] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.491 [2024-12-10 04:10:33.678490] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.491 [2024-12-10 04:10:33.688788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.491 qpair failed and we were unable to recover it.
00:22:39.491 [2024-12-10 04:10:33.698417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.491 [2024-12-10 04:10:33.698454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.491 [2024-12-10 04:10:33.698468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.491 [2024-12-10 04:10:33.698475] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.491 [2024-12-10 04:10:33.698484] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.491 [2024-12-10 04:10:33.708675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.491 qpair failed and we were unable to recover it.
00:22:39.491 [2024-12-10 04:10:33.718515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.491 [2024-12-10 04:10:33.718549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.491 [2024-12-10 04:10:33.718564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.491 [2024-12-10 04:10:33.718571] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.491 [2024-12-10 04:10:33.718576] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.491 [2024-12-10 04:10:33.728948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.491 qpair failed and we were unable to recover it.
00:22:39.491 [2024-12-10 04:10:33.738538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:39.491 [2024-12-10 04:10:33.738575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:39.491 [2024-12-10 04:10:33.738590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:39.491 [2024-12-10 04:10:33.738597] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:39.491 [2024-12-10 04:10:33.738602] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:39.491 [2024-12-10 04:10:33.748924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:39.491 qpair failed and we were unable to recover it.
00:22:39.491 [2024-12-10 04:10:33.758660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.491 [2024-12-10 04:10:33.758695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.491 [2024-12-10 04:10:33.758710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.491 [2024-12-10 04:10:33.758717] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.491 [2024-12-10 04:10:33.758722] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.491 [2024-12-10 04:10:33.769300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.491 qpair failed and we were unable to recover it. 00:22:39.491 [2024-12-10 04:10:33.778680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.491 [2024-12-10 04:10:33.778714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.491 [2024-12-10 04:10:33.778729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.491 [2024-12-10 04:10:33.778735] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.491 [2024-12-10 04:10:33.778741] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.491 [2024-12-10 04:10:33.788967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.491 qpair failed and we were unable to recover it. 00:22:39.491 [2024-12-10 04:10:33.798753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.491 [2024-12-10 04:10:33.798786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.491 [2024-12-10 04:10:33.798801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.491 [2024-12-10 04:10:33.798808] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.492 [2024-12-10 04:10:33.798813] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.492 [2024-12-10 04:10:33.808955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.492 qpair failed and we were unable to recover it. 
00:22:39.492 [2024-12-10 04:10:33.818836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.492 [2024-12-10 04:10:33.818874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.492 [2024-12-10 04:10:33.818889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.492 [2024-12-10 04:10:33.818895] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.492 [2024-12-10 04:10:33.818901] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.492 [2024-12-10 04:10:33.829171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.492 qpair failed and we were unable to recover it. 00:22:39.492 [2024-12-10 04:10:33.838896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.492 [2024-12-10 04:10:33.838937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.492 [2024-12-10 04:10:33.838952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.492 [2024-12-10 04:10:33.838958] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.492 [2024-12-10 04:10:33.838964] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.492 [2024-12-10 04:10:33.849131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.492 qpair failed and we were unable to recover it. 00:22:39.492 [2024-12-10 04:10:33.858879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.492 [2024-12-10 04:10:33.858916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.492 [2024-12-10 04:10:33.858931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.492 [2024-12-10 04:10:33.858938] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.492 [2024-12-10 04:10:33.858944] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.492 [2024-12-10 04:10:33.869329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.492 qpair failed and we were unable to recover it. 
00:22:39.751 [2024-12-10 04:10:33.878914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.751 [2024-12-10 04:10:33.878945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.751 [2024-12-10 04:10:33.878963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.751 [2024-12-10 04:10:33.878970] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.751 [2024-12-10 04:10:33.878975] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.751 [2024-12-10 04:10:33.889195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.751 qpair failed and we were unable to recover it. 00:22:39.751 [2024-12-10 04:10:33.899017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.751 [2024-12-10 04:10:33.899055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.751 [2024-12-10 04:10:33.899070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.751 [2024-12-10 04:10:33.899076] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.751 [2024-12-10 04:10:33.899082] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.751 [2024-12-10 04:10:33.909428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.751 qpair failed and we were unable to recover it. 00:22:39.751 [2024-12-10 04:10:33.919013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.751 [2024-12-10 04:10:33.919050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.751 [2024-12-10 04:10:33.919067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.751 [2024-12-10 04:10:33.919073] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.751 [2024-12-10 04:10:33.919079] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.751 [2024-12-10 04:10:33.929374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.751 qpair failed and we were unable to recover it. 
00:22:39.751 [2024-12-10 04:10:33.939086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.751 [2024-12-10 04:10:33.939123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:33.939138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:33.939144] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:33.939150] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:33.949459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 00:22:39.752 [2024-12-10 04:10:33.959213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.752 [2024-12-10 04:10:33.959245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:33.959260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:33.959284] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:33.959290] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:33.969664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 00:22:39.752 [2024-12-10 04:10:33.979216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.752 [2024-12-10 04:10:33.979254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:33.979274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:33.979281] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:33.979287] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:33.989483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 
00:22:39.752 [2024-12-10 04:10:33.999320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.752 [2024-12-10 04:10:33.999359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:33.999374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:33.999381] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:33.999387] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:34.009665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 00:22:39.752 [2024-12-10 04:10:34.019399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.752 [2024-12-10 04:10:34.019442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:34.019456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:34.019463] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:34.019468] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:34.029548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 00:22:39.752 [2024-12-10 04:10:34.039442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.752 [2024-12-10 04:10:34.039483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:34.039498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:34.039504] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:34.039510] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:34.049824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 
00:22:39.752 [2024-12-10 04:10:34.059543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.752 [2024-12-10 04:10:34.059580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:34.059595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:34.059602] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:34.059607] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:34.069700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 00:22:39.752 [2024-12-10 04:10:34.079504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.752 [2024-12-10 04:10:34.079543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:34.079558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:34.079564] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:34.079570] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:34.089873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 00:22:39.752 [2024-12-10 04:10:34.099575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.752 [2024-12-10 04:10:34.099610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:34.099626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:34.099632] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:34.099638] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:34.109806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 
00:22:39.752 [2024-12-10 04:10:34.119647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.752 [2024-12-10 04:10:34.119685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.752 [2024-12-10 04:10:34.119700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.752 [2024-12-10 04:10:34.119706] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.752 [2024-12-10 04:10:34.119712] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:39.752 [2024-12-10 04:10:34.129992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:39.752 qpair failed and we were unable to recover it. 00:22:40.012 [2024-12-10 04:10:34.139593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.012 [2024-12-10 04:10:34.139631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.012 [2024-12-10 04:10:34.139645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.012 [2024-12-10 04:10:34.139651] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.012 [2024-12-10 04:10:34.139657] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.012 [2024-12-10 04:10:34.150114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.012 qpair failed and we were unable to recover it. 00:22:40.012 [2024-12-10 04:10:34.159829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.012 [2024-12-10 04:10:34.159870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.012 [2024-12-10 04:10:34.159885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.012 [2024-12-10 04:10:34.159892] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.012 [2024-12-10 04:10:34.159898] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.012 [2024-12-10 04:10:34.170115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.012 qpair failed and we were unable to recover it. 
00:22:40.012 [2024-12-10 04:10:34.179759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.012 [2024-12-10 04:10:34.179796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.012 [2024-12-10 04:10:34.179811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.012 [2024-12-10 04:10:34.179818] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.012 [2024-12-10 04:10:34.179824] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.012 [2024-12-10 04:10:34.190065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.012 qpair failed and we were unable to recover it. 00:22:40.012 [2024-12-10 04:10:34.199885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.012 [2024-12-10 04:10:34.199923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.012 [2024-12-10 04:10:34.199938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.012 [2024-12-10 04:10:34.199945] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.012 [2024-12-10 04:10:34.199951] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.012 [2024-12-10 04:10:34.210343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.012 qpair failed and we were unable to recover it. 00:22:40.012 [2024-12-10 04:10:34.219921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.012 [2024-12-10 04:10:34.219960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.012 [2024-12-10 04:10:34.219977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.012 [2024-12-10 04:10:34.219983] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.012 [2024-12-10 04:10:34.219989] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.012 [2024-12-10 04:10:34.230198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.012 qpair failed and we were unable to recover it. 
00:22:40.012 [2024-12-10 04:10:34.240000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.012 [2024-12-10 04:10:34.240038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.012 [2024-12-10 04:10:34.240053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.012 [2024-12-10 04:10:34.240059] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.012 [2024-12-10 04:10:34.240065] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.012 [2024-12-10 04:10:34.250449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.012 qpair failed and we were unable to recover it. 00:22:40.012 [2024-12-10 04:10:34.260035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.012 [2024-12-10 04:10:34.260077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.012 [2024-12-10 04:10:34.260092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.012 [2024-12-10 04:10:34.260099] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.012 [2024-12-10 04:10:34.260104] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.012 [2024-12-10 04:10:34.270296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.012 qpair failed and we were unable to recover it. 00:22:40.012 [2024-12-10 04:10:34.280028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.012 [2024-12-10 04:10:34.280062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.012 [2024-12-10 04:10:34.280077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.012 [2024-12-10 04:10:34.280084] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.012 [2024-12-10 04:10:34.280089] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.012 [2024-12-10 04:10:34.290460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.012 qpair failed and we were unable to recover it. 
00:22:40.012 [2024-12-10 04:10:34.300104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.012 [2024-12-10 04:10:34.300143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.012 [2024-12-10 04:10:34.300158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.012 [2024-12-10 04:10:34.300167] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.012 [2024-12-10 04:10:34.300173] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.012 [2024-12-10 04:10:34.310292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.012 qpair failed and we were unable to recover it. 00:22:40.013 [2024-12-10 04:10:34.320214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.013 [2024-12-10 04:10:34.320251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.013 [2024-12-10 04:10:34.320271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.013 [2024-12-10 04:10:34.320279] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.013 [2024-12-10 04:10:34.320284] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.013 [2024-12-10 04:10:34.330602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.013 qpair failed and we were unable to recover it. 00:22:40.013 [2024-12-10 04:10:34.340323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.013 [2024-12-10 04:10:34.340357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.013 [2024-12-10 04:10:34.340373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.013 [2024-12-10 04:10:34.340379] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.013 [2024-12-10 04:10:34.340385] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.013 [2024-12-10 04:10:34.350519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.013 qpair failed and we were unable to recover it. 
00:22:40.013 [2024-12-10 04:10:34.360340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.013 [2024-12-10 04:10:34.360378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.013 [2024-12-10 04:10:34.360394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.013 [2024-12-10 04:10:34.360401] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.013 [2024-12-10 04:10:34.360407] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.013 [2024-12-10 04:10:34.370682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.013 qpair failed and we were unable to recover it. 00:22:40.013 [2024-12-10 04:10:34.380456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.013 [2024-12-10 04:10:34.380495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.013 [2024-12-10 04:10:34.380509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.013 [2024-12-10 04:10:34.380516] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.013 [2024-12-10 04:10:34.380522] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.013 [2024-12-10 04:10:34.390506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.013 qpair failed and we were unable to recover it. 00:22:40.273 [2024-12-10 04:10:34.400497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.400536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.400550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.400557] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.273 [2024-12-10 04:10:34.400563] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.273 [2024-12-10 04:10:34.411226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.273 qpair failed and we were unable to recover it. 
00:22:40.273 [2024-12-10 04:10:34.420506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.420538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.420552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.420559] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.273 [2024-12-10 04:10:34.420564] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.273 [2024-12-10 04:10:34.430999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.273 qpair failed and we were unable to recover it. 00:22:40.273 [2024-12-10 04:10:34.440533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.440571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.440585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.440592] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.273 [2024-12-10 04:10:34.440598] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.273 [2024-12-10 04:10:34.450927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.273 qpair failed and we were unable to recover it. 00:22:40.273 [2024-12-10 04:10:34.460696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.460734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.460749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.460755] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.273 [2024-12-10 04:10:34.460761] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.273 [2024-12-10 04:10:34.470918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.273 qpair failed and we were unable to recover it. 
00:22:40.273 [2024-12-10 04:10:34.480823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.480864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.480880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.480887] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.273 [2024-12-10 04:10:34.480892] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.273 [2024-12-10 04:10:34.490985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.273 qpair failed and we were unable to recover it. 00:22:40.273 [2024-12-10 04:10:34.500961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.500994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.501009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.501015] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.273 [2024-12-10 04:10:34.501020] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.273 [2024-12-10 04:10:34.511109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.273 qpair failed and we were unable to recover it. 00:22:40.273 [2024-12-10 04:10:34.520868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.520909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.520923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.520930] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.273 [2024-12-10 04:10:34.520935] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.273 [2024-12-10 04:10:34.531313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.273 qpair failed and we were unable to recover it. 
00:22:40.273 [2024-12-10 04:10:34.540919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.540959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.540973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.540980] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.273 [2024-12-10 04:10:34.540986] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.273 [2024-12-10 04:10:34.551097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.273 qpair failed and we were unable to recover it. 00:22:40.273 [2024-12-10 04:10:34.560943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.560979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.560997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.561003] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.273 [2024-12-10 04:10:34.561009] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.273 [2024-12-10 04:10:34.571324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.273 qpair failed and we were unable to recover it. 00:22:40.273 [2024-12-10 04:10:34.581035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.273 [2024-12-10 04:10:34.581072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.273 [2024-12-10 04:10:34.581087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.273 [2024-12-10 04:10:34.581093] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.274 [2024-12-10 04:10:34.581098] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.274 [2024-12-10 04:10:34.591409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.274 qpair failed and we were unable to recover it. 
00:22:40.274 [2024-12-10 04:10:34.601120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.274 [2024-12-10 04:10:34.601155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.274 [2024-12-10 04:10:34.601170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.274 [2024-12-10 04:10:34.601176] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.274 [2024-12-10 04:10:34.601182] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.274 [2024-12-10 04:10:34.611262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.274 qpair failed and we were unable to recover it. 00:22:40.274 [2024-12-10 04:10:34.621208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.274 [2024-12-10 04:10:34.621244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.274 [2024-12-10 04:10:34.621258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.274 [2024-12-10 04:10:34.621265] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.274 [2024-12-10 04:10:34.621275] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.274 [2024-12-10 04:10:34.631372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.274 qpair failed and we were unable to recover it. 00:22:40.274 [2024-12-10 04:10:34.641207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.274 [2024-12-10 04:10:34.641248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.274 [2024-12-10 04:10:34.641263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.274 [2024-12-10 04:10:34.641275] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.274 [2024-12-10 04:10:34.641284] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.274 [2024-12-10 04:10:34.651586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.274 qpair failed and we were unable to recover it. 
00:22:40.534 [2024-12-10 04:10:34.661290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.661327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.661342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.661348] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.661354] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.671386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 00:22:40.534 [2024-12-10 04:10:34.681354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.681386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.681400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.681407] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.681412] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.691627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 00:22:40.534 [2024-12-10 04:10:34.701460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.701498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.701513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.701520] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.701525] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.711589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 
00:22:40.534 [2024-12-10 04:10:34.721499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.721535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.721550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.721556] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.721562] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.731825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 00:22:40.534 [2024-12-10 04:10:34.741469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.741507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.741522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.741528] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.741534] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.751821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 00:22:40.534 [2024-12-10 04:10:34.761657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.761697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.761712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.761719] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.761725] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.771827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 
00:22:40.534 [2024-12-10 04:10:34.781603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.781640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.781655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.781661] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.781667] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.791833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 00:22:40.534 [2024-12-10 04:10:34.801720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.801761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.801776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.801783] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.801788] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.811920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 00:22:40.534 [2024-12-10 04:10:34.821774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.821816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.821831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.821837] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.821843] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.832140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 
00:22:40.534 [2024-12-10 04:10:34.841822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.841857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.841873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.841880] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.841886] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.534 [2024-12-10 04:10:34.851993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.534 qpair failed and we were unable to recover it. 00:22:40.534 [2024-12-10 04:10:34.861832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:40.534 [2024-12-10 04:10:34.861869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:40.534 [2024-12-10 04:10:34.861884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:40.534 [2024-12-10 04:10:34.861891] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:40.534 [2024-12-10 04:10:34.861897] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:22:40.535 [2024-12-10 04:10:34.872137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:40.535 qpair failed and we were unable to recover it. 
00:22:41.923 Read completed with error (sct=0, sc=8)
00:22:41.923 starting I/O failed
00:22:41.923 Write completed with error (sct=0, sc=8)
00:22:41.923 starting I/O failed
00:22:41.923 Write completed with error (sct=0, sc=8)
00:22:41.923 starting I/O failed
00:22:41.923 Write completed with error (sct=0, sc=8)
00:22:41.923 starting I/O failed
00:22:41.923 Read completed with error (sct=0, sc=8)
00:22:41.923 starting I/O failed
00:22:41.923 Read completed with error (sct=0, sc=8)
00:22:41.923 starting I/O failed
00:22:41.923 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Read completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 Write completed with error (sct=0, sc=8)
00:22:41.924 starting I/O failed
00:22:41.924 [2024-12-10 04:10:35.877154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:41.924 [2024-12-10 04:10:35.884471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:41.924 [2024-12-10 04:10:35.884509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:41.924 [2024-12-10 04:10:35.884524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:41.924 [2024-12-10 04:10:35.884532] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:41.924 [2024-12-10 04:10:35.884538] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b5f80
00:22:41.924 [2024-12-10 04:10:35.895148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:41.924 qpair failed and we were unable to recover it.
00:22:41.924 [2024-12-10 04:10:35.904818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:41.924 [2024-12-10 04:10:35.904855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:41.924 [2024-12-10 04:10:35.904870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:41.924 [2024-12-10 04:10:35.904877] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:41.924 [2024-12-10 04:10:35.904884] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b5f80
00:22:41.924 [2024-12-10 04:10:35.915136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:41.924 qpair failed and we were unable to recover it.
00:22:41.924 [2024-12-10 04:10:35.924869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:41.924 [2024-12-10 04:10:35.924907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:41.924 [2024-12-10 04:10:35.924925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:41.924 [2024-12-10 04:10:35.924932] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:41.924 [2024-12-10 04:10:35.924939] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:22:41.924 [2024-12-10 04:10:35.935173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:41.924 qpair failed and we were unable to recover it.
00:22:41.924 [2024-12-10 04:10:35.945004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:41.924 [2024-12-10 04:10:35.945043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:41.924 [2024-12-10 04:10:35.945057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:41.924 [2024-12-10 04:10:35.945068] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:41.924 [2024-12-10 04:10:35.945074] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:22:41.924 [2024-12-10 04:10:35.955391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:41.924 qpair failed and we were unable to recover it.
00:22:41.924 [2024-12-10 04:10:35.955516] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:22:41.924 A controller has encountered a failure and is being reset.
00:22:41.924 [2024-12-10 04:10:35.965015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:41.924 [2024-12-10 04:10:35.965057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:41.924 [2024-12-10 04:10:35.965079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:41.924 [2024-12-10 04:10:35.965089] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:41.924 [2024-12-10 04:10:35.965097] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40
00:22:41.924 [2024-12-10 04:10:35.975469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:41.924 qpair failed and we were unable to recover it.
00:22:41.924 [2024-12-10 04:10:35.985075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:41.924 [2024-12-10 04:10:35.985112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:41.924 [2024-12-10 04:10:35.985127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:41.924 [2024-12-10 04:10:35.985134] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:41.924 [2024-12-10 04:10:35.985139] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40
00:22:41.924 [2024-12-10 04:10:35.995406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:41.924 qpair failed and we were unable to recover it.
00:22:41.924 [2024-12-10 04:10:35.995570] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:22:41.924 [2024-12-10 04:10:36.029252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:22:41.924 Controller properly reset.
00:22:41.924 Initializing NVMe Controllers
00:22:41.924 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:22:41.924 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:22:41.924 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:22:41.924 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:22:41.924 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:22:41.924 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:22:41.924 Initialization complete. Launching workers.
00:22:41.924 Starting thread on core 1
00:22:41.924 Starting thread on core 2
00:22:41.924 Starting thread on core 3
00:22:41.924 Starting thread on core 0
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:22:41.924
00:22:41.924 real 0m11.943s
00:22:41.924 user 0m25.534s
00:22:41.924 sys 0m2.260s
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:41.924 ************************************
00:22:41.924 END TEST nvmf_target_disconnect_tc2
00:22:41.924 ************************************
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']'
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:22:41.924 ************************************
00:22:41.924 START TEST nvmf_target_disconnect_tc3
00:22:41.924 ************************************
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=880125
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2
00:22:41.924 04:10:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
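For orientation before the next wall of output: tc3 runs the reconnect example against 192.168.100.8 with 192.168.100.9 registered as the alternate address, then yanks the target out from under it. Paraphrased as a few lines of shell, with the flags and PIDs taken from the trace itself and disconnect_init standing for the harness helper that restarts the target (a sketch of the flow, not the literal script):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' &
  reconnectpid=$!                  # 880125 in this run
  sleep 2
  kill -9 "$nvmfpid"               # 878767: kill the target serving 192.168.100.8 mid-I/O
  sleep 2
  disconnect_init 192.168.100.9    # bring a fresh target up on the failover address
  wait "$reconnectpid"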
00:22:43.831 04:10:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 878767
04:10:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2
00:22:45.209 Read completed with error (sct=0, sc=8)
00:22:45.209 starting I/O failed
00:22:45.209 Read completed with error (sct=0, sc=8)
00:22:45.209 starting I/O failed
00:22:45.209 Write completed with error (sct=0, sc=8)
00:22:45.209 starting I/O failed
00:22:45.209 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Write completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 Read completed with error (sct=0, sc=8)
00:22:45.210 starting I/O failed
00:22:45.210 [2024-12-10 04:10:39.311557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
00:22:45.778 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 878767 Killed "${NVMF_APP[@]}" "$@"
00:22:45.778 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9
00:22:45.778 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:22:45.778 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:45.778 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:45.778 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.038 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=880807
00:22:46.038 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 880807
00:22:46.038 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:22:46.038 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 880807 ']'
00:22:46.038 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:46.038 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:46.038 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:46.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:46.038 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:46.038 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.038 [2024-12-10 04:10:40.211665] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:22:46.038 [2024-12-10 04:10:40.211714] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:46.038 [2024-12-10 04:10:40.286034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Read completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 Write completed with error (sct=0, sc=8)
00:22:46.038 starting I/O failed
00:22:46.038 [2024-12-10 04:10:40.316541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:22:46.038 [2024-12-10 04:10:40.323807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:46.038 [2024-12-10 04:10:40.323833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:46.038 [2024-12-10 04:10:40.323841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:46.038 [2024-12-10 04:10:40.323848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:46.038 [2024-12-10 04:10:40.323854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:46.038 [2024-12-10 04:10:40.325299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:22:46.038 [2024-12-10 04:10:40.325405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:22:46.038 [2024-12-10 04:10:40.325512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:22:46.038 [2024-12-10 04:10:40.325513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.298 Malloc0
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.298 [2024-12-10 04:10:40.532622] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6f3170/0x6fee50) succeed.
00:22:46.298 [2024-12-10 04:10:40.541248] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6f4800/0x7404f0) succeed.
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:46.298 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.298 [2024-12-10 04:10:40.679160] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:22:46.557 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:46.557 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
00:22:46.557 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:46.557 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.557 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:46.557 04:10:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 880125
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Write completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 Read completed with error (sct=0, sc=8)
00:22:47.125 starting I/O failed
00:22:47.125 [2024-12-10 04:10:41.321427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:22:47.125 [2024-12-10 04:10:41.322955] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:47.125 [2024-12-10 04:10:41.322973] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:47.125 [2024-12-10 04:10:41.322980] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:48.062 [2024-12-10 04:10:42.326732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:22:48.062 qpair failed and we were unable to recover it.
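The failover target the host is now probing was assembled above through rpc_cmd, the autotest wrapper around scripts/rpc.py. Outside the harness the equivalent standalone sequence would look roughly like this (default RPC socket assumed; a sketch of the same calls, not a new procedure):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420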
00:22:48.062 [2024-12-10 04:10:42.328113] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:48.062 [2024-12-10 04:10:42.328128] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:48.062 [2024-12-10 04:10:42.328134] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:49.000 [2024-12-10 04:10:43.331983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:22:49.000 qpair failed and we were unable to recover it.
00:22:49.000 [2024-12-10 04:10:43.333291] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:49.000 [2024-12-10 04:10:43.333306] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:49.000 [2024-12-10 04:10:43.333312] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:50.378 [2024-12-10 04:10:44.337141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:22:50.378 qpair failed and we were unable to recover it.
00:22:50.378 [2024-12-10 04:10:44.338493] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:50.378 [2024-12-10 04:10:44.338509] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:50.378 [2024-12-10 04:10:44.338514] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:51.313 [2024-12-10 04:10:45.342307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:22:51.313 qpair failed and we were unable to recover it.
00:22:51.313 [2024-12-10 04:10:45.343742] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:51.313 [2024-12-10 04:10:45.343758] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:51.313 [2024-12-10 04:10:45.343764] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:52.249 [2024-12-10 04:10:46.347529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:22:52.249 qpair failed and we were unable to recover it.
00:22:52.249 [2024-12-10 04:10:46.348845] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:52.249 [2024-12-10 04:10:46.348861] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:52.249 [2024-12-10 04:10:46.348870] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:53.185 [2024-12-10 04:10:47.352654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:22:53.185 qpair failed and we were unable to recover it.
00:22:53.185 [2024-12-10 04:10:47.353970] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:53.185 [2024-12-10 04:10:47.353986] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:53.185 [2024-12-10 04:10:47.353992] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:22:54.123 [2024-12-10 04:10:48.357851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:22:54.123 qpair failed and we were unable to recover it.
00:22:54.123 [2024-12-10 04:10:48.359294] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:54.123 [2024-12-10 04:10:48.359313] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:54.123 [2024-12-10 04:10:48.359319] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:22:55.060 [2024-12-10 04:10:49.363024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:22:55.060 qpair failed and we were unable to recover it.
00:22:55.060 [2024-12-10 04:10:49.364438] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:55.060 [2024-12-10 04:10:49.364453] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:55.060 [2024-12-10 04:10:49.364458] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:22:55.997 [2024-12-10 04:10:50.368185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:22:55.997 qpair failed and we were unable to recover it.
00:22:55.997 [2024-12-10 04:10:50.368314] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed
00:22:55.997 A controller has encountered a failure and is being reset.
00:22:55.997 Resorting to new failover address 192.168.100.9
00:22:55.997 [2024-12-10 04:10:50.369797] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:55.997 [2024-12-10 04:10:50.369819] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:55.997 [2024-12-10 04:10:50.369827] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40
00:22:57.375 [2024-12-10 04:10:51.373481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.375 qpair failed and we were unable to recover it.
00:22:57.375 [2024-12-10 04:10:51.374803] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:57.375 [2024-12-10 04:10:51.374817] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:57.375 [2024-12-10 04:10:51.374822] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40
00:22:58.311 [2024-12-10 04:10:52.378687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
00:22:58.311 qpair failed and we were unable to recover it.
00:22:58.311 [2024-12-10 04:10:52.378806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:22:58.311 [2024-12-10 04:10:52.378918] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:22:58.311 [2024-12-10 04:10:52.381131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:22:58.311 Controller properly reset.
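Up to that reset, the host cycles through one CONNECT attempt per second, each leaving the same three-line signature (RDMA_CM_EVENT_REJECTED, then RDMA connect error -74, then CQ transport error -6), first on qpair id 3, then 2, then 1 once it resorts to the failover address. When triaging a saved console log, the cadence and the per-qpair spread fall out of two greps (console.log is a stand-in file name, not a path from this run):

  grep -c 'RDMA connect error -74' console.log               # total rejected CONNECT attempts
  grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c   # failure count per qpair id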
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Read completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 Write completed with error (sct=0, sc=8)
00:22:59.248 starting I/O failed
00:22:59.248 [2024-12-10 04:10:53.425792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:59.248 Initializing NVMe Controllers
00:22:59.248 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:22:59.248 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:22:59.248 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:22:59.248 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:22:59.248 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:22:59.248 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:22:59.248 Initialization complete. Launching workers.
00:22:59.248 Starting thread on core 1
00:22:59.248 Starting thread on core 2
00:22:59.248 Starting thread on core 3
00:22:59.248 Starting thread on core 0
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync
00:22:59.248
00:22:59.248 real 0m17.328s
00:22:59.248 user 1m0.962s
00:22:59.248 sys 0m4.047s
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:59.248 ************************************
00:22:59.248 END TEST nvmf_target_disconnect_tc3
00:22:59.248 ************************************
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:59.248 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 880807 ']'
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 880807
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 880807 ']'
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 880807
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 880807
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:22:59.249 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:22:59.508 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 880807'
killing process with pid 880807
04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 880807
04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 880807
00:22:59.508 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:59.508 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:22:59.508
00:22:59.508 real 0m36.537s
00:22:59.508 user 2m22.866s
00:22:59.508 sys 0m10.963s
00:22:59.508 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:59.508 04:10:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:22:59.508 ************************************
00:22:59.508 END TEST nvmf_target_disconnect
00:22:59.508 ************************************
00:22:59.767 04:10:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:22:59.767
00:22:59.767 real 5m7.300s
00:22:59.767 user 12m23.482s
00:22:59.767 sys 1m25.176s
00:22:59.767 04:10:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:59.767 04:10:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:59.767 ************************************
00:22:59.767 END TEST nvmf_host
00:22:59.767 ************************************
00:22:59.767 04:10:53 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]]
00:22:59.767
00:22:59.767 real 16m3.387s
00:22:59.767 user 40m0.380s
00:22:59.767 sys 4m38.020s
00:22:59.767 04:10:53 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:59.767 04:10:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:59.767 ************************************
00:22:59.767 END TEST nvmf_rdma
00:22:59.767 ************************************
00:22:59.767 04:10:53 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma
00:22:59.767 04:10:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:59.767 04:10:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:59.767 04:10:53 -- common/autotest_common.sh@10 -- # set +x
00:22:59.767 ************************************
00:22:59.767 START TEST spdkcli_nvmf_rdma
00:22:59.767 ************************************
00:22:59.767 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma
00:22:59.768 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli
00:22:59.768 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:59.768 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:59.768 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-:
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-:
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<'
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:00.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:00.030 --rc genhtml_branch_coverage=1
00:23:00.030 --rc genhtml_function_coverage=1
00:23:00.030 --rc genhtml_legend=1
00:23:00.030 --rc geninfo_all_blocks=1
00:23:00.030 --rc geninfo_unexecuted_blocks=1
00:23:00.030
00:23:00.030 '
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:00.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:00.030 --rc genhtml_branch_coverage=1
00:23:00.030 --rc genhtml_function_coverage=1
00:23:00.030 --rc genhtml_legend=1
00:23:00.030 --rc geninfo_all_blocks=1
00:23:00.030 --rc geninfo_unexecuted_blocks=1
00:23:00.030
00:23:00.030 '
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:23:00.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:00.030 --rc genhtml_branch_coverage=1
00:23:00.030 --rc genhtml_function_coverage=1
00:23:00.030 --rc genhtml_legend=1
00:23:00.030 --rc geninfo_all_blocks=1
00:23:00.030 --rc geninfo_unexecuted_blocks=1
00:23:00.030
00:23:00.030 '
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:23:00.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:00.030 --rc genhtml_branch_coverage=1
00:23:00.030 --rc genhtml_function_coverage=1
00:23:00.030 --rc genhtml_legend=1
00:23:00.030 --rc geninfo_all_blocks=1
00:23:00.030 --rc geninfo_unexecuted_blocks=1
00:23:00.030
00:23:00.030 '
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:00.030 04:10:54 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=883396
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 883396
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 883396 ']'
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:00.031 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:23:00.031 [2024-12-10 04:10:54.232310] Starting SPDK v25.01-pre git sha1 86d35c37a / DPDK 24.03.0 initialization...
00:23:00.031 [2024-12-10 04:10:54.232358] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883396 ]
00:23:00.031 [2024-12-10 04:10:54.289708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:23:00.031 [2024-12-10 04:10:54.330568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:00.031 [2024-12-10 04:10:54.330572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]]
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns
04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.290 04:10:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.853 04:11:00 
spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:06.853 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:06.853 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.853 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:06.853 Found net devices under 0000:18:00.0: mlx_0_0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:06.854 Found net devices under 0000:18:00.1: mlx_0_1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
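[Editor's note] The trace above shows nvmf/common.sh walking the detected Mellanox PCI functions and resolving each one to its kernel netdev through sysfs (nvmf/common.sh@410-429). A minimal standalone sketch of that lookup follows; the two PCI addresses are the ones found in this run, and everything else (script name, variable layout) is an illustrative assumption, not the harness itself:

#!/usr/bin/env bash
# Sketch only: assumes the two mlx5 ports from this log exist on the host.
pci_devs=(0000:18:00.0 0000:18:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Each PCI function exposes its netdev name(s) under sysfs, as the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) line in the trace does.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Strip the sysfs path prefix, keeping only the interface names.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

On this node the loop printed mlx_0_0 and mlx_0_1, which is what the two "Found net devices under ..." records below confirm.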
00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:06.854 04:11:00 
spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:06.854 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:06.854 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:06.854 altname enp24s0f0np0 00:23:06.854 altname ens785f0np0 00:23:06.854 inet 192.168.100.8/24 scope global mlx_0_0 00:23:06.854 valid_lft forever preferred_lft forever 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:06.854 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:06.854 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:06.854 altname enp24s0f1np1 00:23:06.854 altname ens785f1np1 00:23:06.854 inet 192.168.100.9/24 scope global mlx_0_1 00:23:06.854 valid_lft forever preferred_lft forever 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:06.854 192.168.100.9' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:06.854 192.168.100.9' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:06.854 192.168.100.9' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:06.854 04:11:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:06.854 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:06.854 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:23:06.854 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:23:06.854 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:23:06.854 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:23:06.854 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:23:06.854 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:06.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:23:06.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:23:06.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:23:06.854 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:06.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:23:06.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:23:06.854 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:06.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:23:06.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:23:06.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:23:06.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:06.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:06.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:23:06.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:23:06.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:23:06.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:23:06.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:06.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:23:06.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:23:06.855 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:23:06.855 ' 00:23:08.756 [2024-12-10 04:11:02.701959] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f6d440/0x1f7b440) succeed. 00:23:08.756 [2024-12-10 04:11:02.710382] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f6eb20/0x1ffb480) succeed. 
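[Editor's note] Earlier in this section (nvmf/common.sh@116-117) the harness derived the RDMA target IPs by parsing `ip -o -4 addr show` output. A hedged re-statement of that helper, runnable on its own; the interface name is the one from this log and the function name mirrors the traced one:

get_ip_address() {
    local interface=$1
    # Column 4 of `ip -o -4 addr show` is ADDR/PREFIX; cut strips the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # printed 192.168.100.8 in the run above

The same call against mlx_0_1 yielded 192.168.100.9, giving the NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP pair used by the spdkcli commands below.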
00:23:09.693 [2024-12-10 04:11:03.977943] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:23:12.227 [2024-12-10 04:11:06.208748] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:23:14.130 [2024-12-10 04:11:08.126826] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:23:15.508 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:23:15.508 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:23:15.508 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:23:15.508 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:23:15.508 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:23:15.508 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:23:15.508 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:23:15.508 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:23:15.508 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:23:15.508 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:23:15.508 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:23:15.508 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:23:15.508 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:23:15.508 04:11:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:23:15.508 04:11:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.508 04:11:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:15.508 04:11:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:23:15.508 04:11:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.508 04:11:09 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:15.508 04:11:09 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:23:15.508 04:11:09 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:23:15.767 04:11:10 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:23:16.025 04:11:10 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:23:16.025 04:11:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:23:16.025 04:11:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.025 04:11:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:16.025 04:11:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:23:16.025 04:11:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.025 04:11:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:16.025 04:11:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:23:16.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:23:16.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:16.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:23:16.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:23:16.026 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:23:16.026 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:23:16.026 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:16.026 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:23:16.026 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:23:16.026 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:23:16.026 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:23:16.026 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:23:16.026 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:23:16.026 ' 00:23:21.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:23:21.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:23:21.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:21.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:23:21.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:23:21.297 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:23:21.297 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:23:21.297 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:21.297 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:23:21.297 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:23:21.297 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:23:21.297 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:23:21.297 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:23:21.297 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 883396 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 883396 ']' 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 883396 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883396 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883396' 00:23:21.297 killing process with pid 883396 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 883396 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 883396 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:21.297 
04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.297 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:21.297 rmmod nvme_rdma 00:23:21.297 rmmod nvme_fabrics 00:23:21.298 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.298 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:23:21.298 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:23:21.298 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:21.298 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.298 04:11:15 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:21.298 00:23:21.298 real 0m21.621s 00:23:21.298 user 0m45.781s 00:23:21.298 sys 0m5.083s 00:23:21.298 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.298 04:11:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:21.298 ************************************ 00:23:21.298 END TEST spdkcli_nvmf_rdma 00:23:21.298 ************************************ 00:23:21.298 04:11:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:21.298 04:11:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:21.298 04:11:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:21.298 04:11:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:21.298 04:11:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:21.298 04:11:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:23:21.298 04:11:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:21.298 04:11:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.298 04:11:15 -- common/autotest_common.sh@10 -- # set +x 00:23:21.298 04:11:15 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:21.298 04:11:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:21.298 04:11:15 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:21.298 04:11:15 -- common/autotest_common.sh@10 -- # set +x 00:23:26.571 INFO: APP EXITING 00:23:26.571 INFO: killing all VMs 00:23:26.571 INFO: killing vhost app 00:23:26.571 WARN: no vhost pid file found 00:23:26.571 INFO: EXIT DONE 00:23:29.105 Waiting for block devices as requested 00:23:29.105 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:29.105 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:29.364 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:29.364 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:29.364 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:29.364 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:29.662 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:29.662 0000:00:04.0 (8086 
2021): vfio-pci -> ioatdma 00:23:29.662 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:29.662 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:29.961 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:29.961 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:29.961 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:29.961 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:30.232 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:30.232 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:30.232 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:23:34.417 Cleaning 00:23:34.417 Removing: /var/run/dpdk/spdk0/config 00:23:34.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:34.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:34.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:34.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:34.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:23:34.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:23:34.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:23:34.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:23:34.417 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:34.417 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:34.417 Removing: /var/run/dpdk/spdk1/config 00:23:34.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:34.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:34.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:34.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:34.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:23:34.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:23:34.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:23:34.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:23:34.417 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:34.417 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:34.417 Removing: /var/run/dpdk/spdk1/mp_socket 00:23:34.417 Removing: /var/run/dpdk/spdk2/config 00:23:34.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:34.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:34.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:34.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:34.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:23:34.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:23:34.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:23:34.418 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:23:34.418 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:34.676 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:34.676 Removing: /var/run/dpdk/spdk3/config 00:23:34.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:34.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:34.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:34.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:34.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:23:34.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:23:34.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:23:34.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:23:34.676 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:34.676 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:34.676 Removing: /var/run/dpdk/spdk4/config 00:23:34.676 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:34.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:34.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:34.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:34.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:23:34.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:23:34.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:23:34.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:23:34.676 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:34.676 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:34.676 Removing: /dev/shm/bdevperf_trace.pid630572 00:23:34.676 Removing: /dev/shm/bdev_svc_trace.1 00:23:34.676 Removing: /dev/shm/nvmf_trace.0 00:23:34.676 Removing: /dev/shm/spdk_tgt_trace.pid585831 00:23:34.676 Removing: /var/run/dpdk/spdk0 00:23:34.676 Removing: /var/run/dpdk/spdk1 00:23:34.676 Removing: /var/run/dpdk/spdk2 00:23:34.676 Removing: /var/run/dpdk/spdk3 00:23:34.676 Removing: /var/run/dpdk/spdk4 00:23:34.676 Removing: /var/run/dpdk/spdk_pid582559 00:23:34.676 Removing: /var/run/dpdk/spdk_pid584102 00:23:34.676 Removing: /var/run/dpdk/spdk_pid585831 00:23:34.676 Removing: /var/run/dpdk/spdk_pid586285 00:23:34.676 Removing: /var/run/dpdk/spdk_pid587354 00:23:34.676 Removing: /var/run/dpdk/spdk_pid587572 00:23:34.676 Removing: /var/run/dpdk/spdk_pid588557 00:23:34.676 Removing: /var/run/dpdk/spdk_pid588736 00:23:34.676 Removing: /var/run/dpdk/spdk_pid588931 00:23:34.676 Removing: /var/run/dpdk/spdk_pid594294 00:23:34.676 Removing: /var/run/dpdk/spdk_pid596408 00:23:34.676 Removing: /var/run/dpdk/spdk_pid597123 00:23:34.676 Removing: /var/run/dpdk/spdk_pid597440 00:23:34.676 Removing: /var/run/dpdk/spdk_pid597616 00:23:34.676 Removing: /var/run/dpdk/spdk_pid597934 00:23:34.676 Removing: /var/run/dpdk/spdk_pid598216 00:23:34.676 Removing: /var/run/dpdk/spdk_pid598496 00:23:34.676 Removing: /var/run/dpdk/spdk_pid598808 00:23:34.676 Removing: /var/run/dpdk/spdk_pid599407 00:23:34.676 Removing: /var/run/dpdk/spdk_pid602608 00:23:34.676 Removing: /var/run/dpdk/spdk_pid602828 00:23:34.676 Removing: /var/run/dpdk/spdk_pid603116 00:23:34.676 Removing: /var/run/dpdk/spdk_pid603134 00:23:34.676 Removing: /var/run/dpdk/spdk_pid603669 00:23:34.676 Removing: /var/run/dpdk/spdk_pid603687 00:23:34.676 Removing: /var/run/dpdk/spdk_pid604237 00:23:34.676 Removing: /var/run/dpdk/spdk_pid604249 00:23:34.676 Removing: /var/run/dpdk/spdk_pid604540 00:23:34.676 Removing: /var/run/dpdk/spdk_pid604703 00:23:34.676 Removing: /var/run/dpdk/spdk_pid604835 00:23:34.676 Removing: /var/run/dpdk/spdk_pid605033 00:23:34.676 Removing: /var/run/dpdk/spdk_pid605474 00:23:34.676 Removing: /var/run/dpdk/spdk_pid605754 00:23:34.676 Removing: /var/run/dpdk/spdk_pid606086 00:23:34.676 Removing: /var/run/dpdk/spdk_pid610051 00:23:34.676 Removing: /var/run/dpdk/spdk_pid614123 00:23:34.676 Removing: /var/run/dpdk/spdk_pid624765 00:23:34.676 Removing: /var/run/dpdk/spdk_pid625695 00:23:34.935 Removing: /var/run/dpdk/spdk_pid630572 00:23:34.935 Removing: /var/run/dpdk/spdk_pid630872 00:23:34.935 Removing: /var/run/dpdk/spdk_pid634853 00:23:34.935 Removing: /var/run/dpdk/spdk_pid641020 00:23:34.935 Removing: /var/run/dpdk/spdk_pid643732 00:23:34.935 Removing: /var/run/dpdk/spdk_pid654433 00:23:34.935 Removing: /var/run/dpdk/spdk_pid678920 00:23:34.935 Removing: /var/run/dpdk/spdk_pid682705 00:23:34.935 Removing: /var/run/dpdk/spdk_pid725791 00:23:34.935 Removing: 
/var/run/dpdk/spdk_pid731252 00:23:34.935 Removing: /var/run/dpdk/spdk_pid736798 00:23:34.935 Removing: /var/run/dpdk/spdk_pid745504 00:23:34.935 Removing: /var/run/dpdk/spdk_pid785673 00:23:34.935 Removing: /var/run/dpdk/spdk_pid786611 00:23:34.935 Removing: /var/run/dpdk/spdk_pid787571 00:23:34.935 Removing: /var/run/dpdk/spdk_pid788743 00:23:34.935 Removing: /var/run/dpdk/spdk_pid793382 00:23:34.935 Removing: /var/run/dpdk/spdk_pid799729 00:23:34.935 Removing: /var/run/dpdk/spdk_pid807374 00:23:34.935 Removing: /var/run/dpdk/spdk_pid808239 00:23:34.935 Removing: /var/run/dpdk/spdk_pid809286 00:23:34.935 Removing: /var/run/dpdk/spdk_pid810328 00:23:34.935 Removing: /var/run/dpdk/spdk_pid810647 00:23:34.935 Removing: /var/run/dpdk/spdk_pid815170 00:23:34.935 Removing: /var/run/dpdk/spdk_pid815178 00:23:34.935 Removing: /var/run/dpdk/spdk_pid819619 00:23:34.935 Removing: /var/run/dpdk/spdk_pid820306 00:23:34.935 Removing: /var/run/dpdk/spdk_pid820840 00:23:34.935 Removing: /var/run/dpdk/spdk_pid821620 00:23:34.935 Removing: /var/run/dpdk/spdk_pid821632 00:23:34.935 Removing: /var/run/dpdk/spdk_pid826747 00:23:34.935 Removing: /var/run/dpdk/spdk_pid827404 00:23:34.935 Removing: /var/run/dpdk/spdk_pid831592 00:23:34.935 Removing: /var/run/dpdk/spdk_pid834482 00:23:34.935 Removing: /var/run/dpdk/spdk_pid839942 00:23:34.935 Removing: /var/run/dpdk/spdk_pid850366 00:23:34.935 Removing: /var/run/dpdk/spdk_pid850370 00:23:34.935 Removing: /var/run/dpdk/spdk_pid871356 00:23:34.935 Removing: /var/run/dpdk/spdk_pid871639 00:23:34.935 Removing: /var/run/dpdk/spdk_pid877669 00:23:34.935 Removing: /var/run/dpdk/spdk_pid877980 00:23:34.935 Removing: /var/run/dpdk/spdk_pid880125 00:23:34.935 Removing: /var/run/dpdk/spdk_pid883396 00:23:34.935 Clean 00:23:34.935 04:11:29 -- common/autotest_common.sh@1453 -- # return 0 00:23:34.935 04:11:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:34.935 04:11:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.935 04:11:29 -- common/autotest_common.sh@10 -- # set +x 00:23:35.193 04:11:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:35.193 04:11:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.193 04:11:29 -- common/autotest_common.sh@10 -- # set +x 00:23:35.193 04:11:29 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:23:35.193 04:11:29 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:23:35.193 04:11:29 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:23:35.193 04:11:29 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:35.193 04:11:29 -- spdk/autotest.sh@398 -- # hostname 00:23:35.193 04:11:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-37 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:23:35.193 geninfo: WARNING: invalid characters removed from testname! 
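[Editor's note] The capture step just traced (spdk/autotest.sh@398) collects per-run coverage with lcov. A trimmed sketch of that invocation, keeping the workspace paths and node name exactly as they appear in this log; the rc option list is abbreviated to the two lcov flags, and the full set (genhtml_* / geninfo_* options) is visible in the trace below:

lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     -q -c --no-external \
     -d /var/jenkins/workspace/nvmf-phy-autotest/spdk \
     -t spdk-wfp-37 \
     -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info

The subsequent lcov -a / lcov -r passes merge this file with the baseline and filter out dpdk, /usr, and example sources before the final cov_total.info is written.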
00:23:53.277 04:11:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:55.812 04:11:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:57.190 04:11:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:58.574 04:11:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:24:00.479 04:11:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:24:01.855 04:11:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:24:03.761 04:11:57 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:03.761 04:11:57 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:03.761 04:11:57 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:24:03.761 04:11:57 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:03.761 04:11:57 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:03.761 04:11:57 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:24:03.761 + [[ -n 504175 ]] 00:24:03.761 + sudo kill 504175 00:24:03.770 [Pipeline] } 00:24:03.785 [Pipeline] // stage 00:24:03.790 [Pipeline] } 00:24:03.804 [Pipeline] 
// timeout 00:24:03.808 [Pipeline] } 00:24:03.822 [Pipeline] // catchError 00:24:03.828 [Pipeline] } 00:24:03.844 [Pipeline] // wrap 00:24:03.851 [Pipeline] } 00:24:03.864 [Pipeline] // catchError 00:24:03.873 [Pipeline] stage 00:24:03.875 [Pipeline] { (Epilogue) 00:24:03.888 [Pipeline] catchError 00:24:03.890 [Pipeline] { 00:24:03.903 [Pipeline] echo 00:24:03.904 Cleanup processes 00:24:03.910 [Pipeline] sh 00:24:04.197 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:24:04.197 898650 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:24:04.210 [Pipeline] sh 00:24:04.495 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:24:04.495 ++ grep -v 'sudo pgrep' 00:24:04.495 ++ awk '{print $1}' 00:24:04.495 + sudo kill -9 00:24:04.495 + true 00:24:04.507 [Pipeline] sh 00:24:04.792 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:12.922 [Pipeline] sh 00:24:13.207 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:13.207 Artifacts sizes are good 00:24:13.222 [Pipeline] archiveArtifacts 00:24:13.229 Archiving artifacts 00:24:13.353 [Pipeline] sh 00:24:13.640 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:24:13.654 [Pipeline] cleanWs 00:24:13.664 [WS-CLEANUP] Deleting project workspace... 00:24:13.664 [WS-CLEANUP] Deferred wipeout is used... 00:24:13.671 [WS-CLEANUP] done 00:24:13.673 [Pipeline] } 00:24:13.690 [Pipeline] // catchError 00:24:13.702 [Pipeline] sh 00:24:14.004 + logger -p user.info -t JENKINS-CI 00:24:14.013 [Pipeline] } 00:24:14.026 [Pipeline] // stage 00:24:14.031 [Pipeline] } 00:24:14.045 [Pipeline] // node 00:24:14.050 [Pipeline] End of Pipeline 00:24:14.110 Finished: SUCCESS